The iron ML notebook
AsyncIO


Sources:

  • Python 3's Killer Feature: asyncio

  • You don't have to use semaphores, you get access to shared memory, and it's relatively easy to code (see the sketch below).

  • For applications with lots of I/O, the savings can be substantial.
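
As an illustration of the first point, here is a minimal sketch (not taken from the source; the worker and main names are ours). Because the event loop runs one coroutine at a time and only switches at an await, coroutines can mutate shared state without locks, as long as the read-modify-write itself contains no await:

import asyncio

counter = 0  # plain shared state: no Lock or Semaphore needed

async def worker(n):
    global counter
    for _ in range(n):
        counter += 1            # no await inside, so no other coroutine can interleave here
        await asyncio.sleep(0)  # explicitly yield control back to the event loop

async def main():
    # ten coroutines all mutating the same variable "concurrently"
    await asyncio.gather(*(worker(1_000) for _ in range(10)))

asyncio.run(main())
print(counter)  # always 10000

With OS threads the same unlocked counter could lose updates, because a thread can be preempted in the middle of the read-modify-write; a single-threaded event loop cannot.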

Await coroutines

import asyncio

def write(msg):
    # flush so the output order is visible immediately
    print(msg, flush=True)

async def say1():
    await asyncio.sleep(1)   # yields control to the event loop for ~1 second
    write("Hello 1!")

async def say2():
    await asyncio.sleep(1)
    write("Hello 2!")

write("start")
loop = asyncio.get_event_loop()
# gather() wraps both coroutines into one awaitable;
# run_until_complete() blocks until both of them have finished.
loop.run_until_complete(asyncio.gather(
    say1(),
    say2()
))
write("exit")

loop.close()
  1. When run_until_complete runs the say1 coroutine, the interpreter executes it line by line; when it reaches await, it starts the asynchronous operation, which will later be completed by an internal callback to the loop (that callback is hidden from us, the developers).

  2. Having started the operation, the coroutine immediately returns control to the event loop.

    1. So the asynchronous sleep is running and the loop has control back, which means it is ready to start the next coroutine, say2.

    2. When the first async sleep finishes, it fires its internal callback (hidden from us) and the loop resumes execution of the say1 coroutine: the next operation is printing Hello 1!. After printing, control returns to the event loop again.

  3. At about the same time, the loop receives the event that the second sleep has finished (if two events arrive at the same time they are not lost, they are simply queued).

    1. So now Hello 2! is printed and the second coroutine also returns. run_until_complete(gather(l1, l2, l3)) blocks until all of the l1, l2, l3 coroutines are done.

This can be pictured on a timeline: both coroutines start and hit their await at the 0s mark, and both prints (and the final exit) happen at the 1s mark, because the two sleeps overlap.

Parallel execution of asyncio functions
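
To make that timeline concrete, here is a minimal sketch of the same example (using the newer asyncio.run entry point instead of an explicit loop; the stamp helper is ours) that prints the elapsed time next to each message:

import asyncio
import time

START = time.perf_counter()

def stamp(msg):
    # prefix every message with the time elapsed since the script started
    print(f"{time.perf_counter() - START:4.1f}s  {msg}", flush=True)

async def say1():
    await asyncio.sleep(1)
    stamp("Hello 1!")

async def say2():
    await asyncio.sleep(1)
    stamp("Hello 2!")

async def main():
    stamp("start")
    await asyncio.gather(say1(), say2())
    stamp("exit")

asyncio.run(main())

# Typical output -- both hellos arrive about 1 second after the start (not 2),
# because the two sleeps wait concurrently:
#  0.0s  start
#  1.0s  Hello 1!
#  1.0s  Hello 2!
#  1.0s  exit

asyncio.run() creates the event loop, runs the coroutine and closes the loop for you, which is why the explicit get_event_loop() / loop.close() calls from the example above are no longer needed. Awaiting say1() and say2() one after another instead of gathering them would take about 2 seconds, since the second sleep would only start after the first one finished.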