Python's async and await keywords are among the more advanced features of the language. They help your programs run faster by making sure the CPU spends as little time as possible waiting and as much time as possible working. If you have ever watched a capable chef, you'll know what I mean. The chef is not just following a recipe step by step (i.e. working synchronously); while the water for the pasta comes to the boil, the chef is measuring out the pasta, chopping tomatoes for the sauce, and so on (i.e. working asynchronously). The chef minimizes idle waiting time and is always working on a task. That's the same idea behind async and await.
For this tutorial, we will focus on Python 3.7 as it has some of the more modern async and await features. We will call out some of the differences for Python 3.4 – 3.6.
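As a quick preview of those differences: before the async and await keywords were added in Python 3.5, coroutines were written with the @asyncio.coroutine decorator and yield from, and before asyncio.run() arrived in Python 3.7 you started the event loop by hand. A rough sketch (hypothetical function names, not part of the hamburger example that follows):

import asyncio

# Python 3.4 style: generator-based coroutine (no async/await keywords yet)
@asyncio.coroutine
def old_style_task():
    yield from asyncio.sleep(1)
    return "done"

# Python 3.5/3.6 style: async/await exist, but asyncio.run() does not yet
async def new_style_task():
    await asyncio.sleep(1)
    return "done"

loop = asyncio.get_event_loop()
print(loop.run_until_complete(new_style_task()))
loop.close()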
What is async await in Python?
The async and await keywords let you define which parts of your program need to run sequentially, and which parts may take some time but can let other parts of the program execute while they complete. A modern example: downloading a web page may take a few seconds, and while the download is happening you can execute other parts of your program.
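For instance, here is a minimal sketch of that idea, assuming the third-party httpx library is installed (it is used again later in this article) and using placeholder URLs:

import asyncio
import httpx

async def download(url):
    async with httpx.AsyncClient() as client:
        response = await client.get(url)   # the event loop is free to do other work while we wait
        print(f"{url}: {len(response.text)} characters")

async def main():
    # both downloads are in flight at the same time instead of one after the other
    await asyncio.gather(
        download("https://example.com"),
        download("https://www.python.org"),
    )

asyncio.run(main())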
How does async await work in Python?
Sometimes the best way to explain something is to show how you would achieve the same thing without the feature.
Continuing with the restaurant theme, suppose you are running a hamburger stall (you're both the waiter and the chef). Collecting payment from a customer and serving the finished hamburger are almost instant, but the most time-consuming task is cooking the beef patty, which takes 2 seconds (one could only wish!).
See the below diagram:
In the above diagram:
- Step 1: you would first get the order and collect the money from Customer 1
- Step 2: you would then put a beef patty on the cook top and then wait for 2 seconds for the beef patty to cook. At the same time, Customer 1 is also waiting for 2 seconds.
- Step 3: when the beef patty is cooked, you can then plate this onto a hamburger bun
- Step 4: pass the final hamburger to Customer 1
- Step 5: You would then start to serve Customer 2 (who has already been waiting 2 seconds for you to serve Customer 1). You can then repeat steps 2-4
With the above approach, Customer 1 would have their burger in about 2 seconds, Customer 2 in about 4 seconds, and Customer 3 in about 6 seconds.
The equivalent code would be as follows:
import time, datetime, timeit

customer_queue = [ "C1", "C2", "C3" ]

def get_next_customer():
    return customer_queue.pop(0)  # Get the first customer from the list

def cook_hamburger(customer):
    start_customer_timer = timeit.default_timer()
    print( f"[{customer}]: Start cooking hamburger for customer")
    time.sleep(2)  # It takes 2 seconds to cook the hamburger
    end_customer_timer = timeit.default_timer()
    print( f"[{customer}]: Finish cooking hamburger for customer. Total {end_customer_timer-start_customer_timer} seconds\n")

def run_shop():
    while customer_queue:
        curr_customer = get_next_customer()
        cook_hamburger(curr_customer)

def main():
    print('Hamburger Shop')
    start = timeit.default_timer()
    run_shop()
    stop = timeit.default_timer()
    print(f"** Total runtime: {stop-start} seconds ***")

if __name__ == '__main__':
    main()
The code above is fairly straightforward. We have a list of customers queuing in the list customer_queue, which is looped over inside run_shop(). For each customer (get_next_customer()), we call cook_hamburger() to cook the hamburger for 2 seconds and wait for it to complete.
Running this code you would get the following output:
As expected, the total runtime for 3 customers is 6 seconds since each customer is served sequentially.
Cooking Hamburgers Asynchronously and Coding the Event Loop Manually
Instead of serving a customer and cooking their hamburger from start to finish before moving on, you can obviously do some of these tasks asynchronously: you can start a task, and rather than sitting and waiting for it, you can do something else. See the following diagram, where the chef/waiter is serving multiple customers and cooking at the same time. It's not explicitly shown here, but the chef/waiter is constantly checking on the status of each task, and if a task doesn't require their attention they move on to the next one. This process of always looking for something to do is the equivalent of the "event loop". The event loop is a programming construct whose logic is to always look for a task to execute, and if a task will take some time, to release control to the next task in the loop.
In the above example, the following is happening:
- Step 1: you would first get the order and collect the money from Customer 1
- Step 2: you would then put a beef patty on the cook top and then let it cook, then immediately move on to the next customer while the patty is cooking.
- Step 3: you would first get the order and collect the money from Customer 2. You would also check if the first beef patty has completed cooking yet.
- Step 4: you would then put another beef patty on the cook top and then let it cook, then immediately move on to the next customer while the patty is cooking.
- …
- Step 5: When any of the beef patties are done, you would plate it
- Step 6: Pass the plated hamburger to the respective customer. Note, in the above example we’ve assumed it to be Customer 1, but it could be any customer depending on which beef patty cooked fully first.
- Step 7: When the next beef patty is done, you would plate it and serve it to its customer, and so on
This is the equivalent of the event loop. The chef/waiter is constantly checking whether they need to serve a customer or check on the hamburgers that are cooking. When a hamburger is placed on the stove and needs 2 seconds, the chef/waiter moves on to the next task rather than waiting for those 2 seconds to pass. When the hamburger is done, it is then served to the customer.
How can this be done programmatically? Glad you asked:
import time, datetime, timeit

customer_queue = [ "C1", "C2", "C3" ]
hamburger_queue = []

def get_next_customer():
    if customer_queue: return customer_queue.pop(0)  # Get the first customer from the list
    return None

def start_cooking_hamburger(customer):
    print( f"[{customer}]: Start cooking hamburger for customer")
    hamburger = { "customer":customer, "start_cooking_time": timeit.default_timer(), "cooked":False}
    hamburger_queue.append( hamburger )

def check_hamburger_status():
    curr_timer = timeit.default_timer()
    #Check if it's cooking, but release control
    for index, hamburger in enumerate(hamburger_queue):
        elapsed_time = curr_timer-hamburger['start_cooking_time']
        if elapsed_time > 2:  # 2 seconds have passed, so the hamburger is cooked
            print( f"[{hamburger['customer']}]: Finish cooking hamburger for customer. Total {elapsed_time} seconds\n")
            del hamburger_queue[index]  # delete from the list to mark as done

def run_shop():
    while customer_queue or hamburger_queue:  # Event loop
        curr_customer = get_next_customer()
        if curr_customer: start_cooking_hamburger(curr_customer)
        check_hamburger_status()

def main():
    print('Hamburger Shop')
    start = timeit.default_timer()
    run_shop()
    stop = timeit.default_timer()
    print(f"** Total runtime: {stop-start} seconds ***")

if __name__ == '__main__':
    main()
The output of the code is as follows:
So there are a few things happening here:
- There's a new list called hamburger_queue[] which keeps track of each hamburger that is being cooked
- The event loop is the while customer_queue or hamburger_queue loop within the run_shop() function
- We have a new function called start_cooking_hamburger() which records that a cooking task has started. Why is this needed? In the past we would simply wait for a given task; now, since we do something else while we wait, we need to remember a few things so we can come back to the task
- We also have a new function called check_hamburger_status() which checks the status of each hamburger being cooked (i.e. each item in hamburger_queue[]), and if 2 seconds have passed, it is considered complete
You may notice in the output that Customer 3 was in fact served before Customer 2. This is because the execution order is not guaranteed: check_hamburger_status() deletes items from hamburger_queue while iterating over it, so a later hamburger can be marked as done in the same pass while an earlier one is only picked up on the next pass.
Async Await Code Example in Python
In the previous section we created an asynchronous version manually. Here's the same outcome but written with the async/await syntax. As you'll notice, it is very similar to the original synchronous version:
import asyncio
import time, datetime, timeit

customer_queue = [ "C1", "C2", "C3" ]

def get_next_customer():
    return customer_queue.pop(0)  # Get the first customer from the list

async def cook_hamburger(customer):
    start_customer_timer = timeit.default_timer()
    print( f"[{customer}]: Start cooking hamburger for customer")
    await asyncio.sleep(2)  # Sleep but release control
    end_customer_timer = timeit.default_timer()
    print( f"[{customer}]: Finish cooking hamburger for customer. Total {end_customer_timer-start_customer_timer} seconds\n")

async def run_shop():
    cooking_queue = []
    while customer_queue:
        curr_customer = get_next_customer()
        cooking_queue.append( cook_hamburger(curr_customer) )  # this only creates a task
    #cooking_queue[] now has all the async tasks
    await asyncio.gather( *cooking_queue )  # Run all concurrently

def main():
    print('Hamburger Shop')
    start = timeit.default_timer()
    asyncio.run( run_shop() )  # Start the event loop
    stop = timeit.default_timer()
    print(f"** Total runtime: {stop-start} seconds ***")

if __name__ == '__main__':
    main()
Output as follows:
Let’s walk through the code:
- Firstly, the event loop machinery comes from the asyncio library, hence the import asyncio (async and await themselves are language keywords)
- The async keyword precedes the def run_shop() and def cook_hamburger(customer) functions. In addition, run_shop() is no longer called directly; instead it is passed to asyncio.run( run_shop() ). So here's what is happening:
  - The asyncio.run() function is the trigger for the so-called event loop. It continues to run until all the tasks given to it are completed. You must pass it a function defined with the async def... prefix, hence why run_shop() has the async prefix
  - In the async def run_shop() function, the code iterates while there are customers in the queue to process, and there's a call to cook_hamburger(curr_customer) for each customer. A direct call does not actually run the function body; instead it creates a task to execute later. That is what async tells Python: when called directly, return a task.
  - At the end of the code in def run_shop() there's a call to await asyncio.gather( *cooking_queue ). There are a few things going on here:
    - The await keyword indicates that you need to wait for the work to complete, but Python can do something else in the meantime
    - The call to gather() actually executes all the tasks given to it as parameters collectively as a group and then returns the results in order (note that the order in which the tasks run may vary)
    - The *cooking_queue simply expands the list into separate parameters. So for example if cooking_queue == [ '1', '2', '3' ] then gather( *cooking_queue ) would be the same as gather( '1', '2', '3' ).
  - When await asyncio.gather( *cooking_queue ) is called, the await keyword releases control to any pending activities, one of which is the calls to cook_hamburger() that were added to the cooking_queue list. Hence the calls to cook_hamburger() are triggered.
  - Within cook_hamburger() there is also an await asyncio.sleep(2). This waits for 2 seconds, but it does not force the program to sit idle for those 2 seconds; instead the await keyword releases Python to do something else in the meantime. This is similar to step 3 in Figure 2 where the chef/waiter puts the hamburger on the grill, but rather than watching it for 2 seconds, does something else (i.e. serves the next customer)
- asyncio.run() is new as of Python 3.7. In older versions of Python you may see the following, which is the same as simply running asyncio.run( run_shop() ):
  loop = asyncio.get_event_loop()
  loop.run_until_complete(run_shop())
  loop.close()
- As you will notice, this is very similar to the synchronous code that covers Figure 1 above. This is the beauty of async/await
So remember, whenever there's an await, Python pauses that coroutine at that point until the awaited task completes, but also releases control so it can do something else in the meantime. That's where the performance improvement comes from. In this example, the runtime is 2 seconds instead of the sequential 6 seconds!
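One caveat worth flagging, with a small sketch (not part of the hamburger code above): await only helps if the thing you wait on is itself asynchronous. A plain time.sleep(2) inside a coroutine blocks the whole event loop, so the total would creep back towards 6 seconds:

import asyncio, time

async def blocking_cook(customer):
    time.sleep(2)             # blocks the event loop: nothing else runs during these 2 seconds
    print(f"[{customer}] done (blocking)")

async def non_blocking_cook(customer):
    await asyncio.sleep(2)    # yields control: other coroutines run during these 2 seconds
    print(f"[{customer}] done (non-blocking)")

async def main():
    # roughly 6 seconds: the sleeps run one after another even though we used gather()
    await asyncio.gather(*[blocking_cook(c) for c in ["C1", "C2", "C3"]])
    # roughly 2 seconds: the sleeps overlap
    await asyncio.gather(*[non_blocking_cook(c) for c in ["C1", "C2", "C3"]])

asyncio.run(main())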
Async Function Calling Another Async Function Code Example
Suppose you also want to call another async function once your first async function has completed – how do you go about this? Remember the rule: if you want to run something asynchronously, you have to use the await keyword, and the function you're calling has to be defined with async def ...
To continue with the restaurant theme, suppose that after the hamburger is cooked you ask an assistant to put the hamburger into a takeaway bag, which takes 1 second. This is another task that you need not 'block' on and wait for. Hence, this action can be put into a function defined as async. Here's what the code can look like:
import asyncio
import time, datetime, timeit

customer_queue = [ "C1", "C2", "C3" ]

def get_next_customer():
    return customer_queue.pop(0)  # Get the first customer from the list

async def cook_hamburger(customer):
    start_customer_timer = timeit.default_timer()
    print( f"[{customer}]: Start cooking hamburger for customer")
    await asyncio.sleep(2)  # Sleep but release control
    end_customer_timer = timeit.default_timer()
    print( f"[{customer}]: Finish cooking hamburger for customer. Total {end_customer_timer-start_customer_timer} seconds")
    await put_hamburger_in_takeaway_bag( customer )

async def put_hamburger_in_takeaway_bag( customer):
    start_customer_timer = timeit.default_timer()
    print( f"[{customer}]: Start packing hamburger")
    await asyncio.sleep(1)  # It takes 1 second to pack the hamburger
    end_customer_timer = timeit.default_timer()
    print( f"[{customer}]: Finish packing hamburger. Total {end_customer_timer-start_customer_timer} seconds\n")

async def run_shop():
    cooking_queue = []
    while customer_queue:
        curr_customer = get_next_customer()
        cooking_queue.append( cook_hamburger(curr_customer) )  # Create a task for each customer
    await asyncio.gather( *cooking_queue )  # Run all concurrently

def main():
    print('Hamburger Shop')
    start = timeit.default_timer()
    asyncio.run( run_shop() )  # Start the event loop
    stop = timeit.default_timer()
    print(f"** Total runtime: {stop-start} seconds ***")

if __name__ == '__main__':
    main()
The output would be:
See how once the hamburger is cooked (e.g. [C1]: Finish cooking hamburger for customer. Total 2.000924572115764 seconds), the [C1]: Start packing hamburger step follows immediately, and it too runs asynchronously.
Async Await Real World Example With Web Crawler in Python
One difficulty in learning async/await is that many examples simply use asyncio.sleep(), which is helpful for understanding the concept but not very helpful when you want to build something more useful. Let's try a more complex example where you want to get some stock data from finance.yahoo.com and then, for that same stock, also get the first 3 newspaper articles from news.google.com from the last 24 hours.
Now one thing you will realise is that await only works with functions that are defined as async, so you cannot await just any function. Why? Recall that when you use await you are expecting the function call to return a task (an awaitable) rather than run immediately, so the function needs to be defined as async in order to tell Python that calling it returns a task to be executed at the next available time.
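A tiny illustration of this rule (hypothetical function names):

import asyncio, time

def ordinary_download():
    time.sleep(1)             # a regular function: calling it runs it immediately and returns a value
    return "page"

async def async_download():
    await asyncio.sleep(1)    # a coroutine function: calling it only creates a coroutine object
    return "page"

async def main():
    print(async_download())         # <coroutine object ...> - nothing has run yet (Python warns it was never awaited)
    print(await async_download())   # "page" - awaiting it actually runs it
    # print(await ordinary_download())  # TypeError: object str can't be used in 'await' expression

asyncio.run(main())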
Let’s see the synchronous version of the code:
import asyncio, requests, timeit
from bs4 import BeautifulSoup
from pygooglenews import GoogleNews

stock_list = [ "TSLA", "AAPL"]

def get_stock_price_data(stock):
    print(f"-- getting stock data for {stock}")
    data = {"stock":stock, "price_open":0, "price_close":0 }
    stock_page = requests.get( 'https://finance.yahoo.com/quote/' + stock, headers={'Cache-Control': 'no-cache', "Pragma": "no-cache"})
    soup = BeautifulSoup(stock_page.text, 'html.parser')
    #<fin-streamer active="" class="Fw(b) Fz(36px) Mb(-4px) D(ib)" data-field="regularMarketPrice" data-pricehint="2" data-symbol="TSLA" data-test="qsp-price" data-trend="none" value="759.63">759.63</fin-streamer>
    data['price_close'] = soup.find('fin-streamer', attrs={"data-symbol":stock, "data-field":"regularMarketPrice"} ).text
    #<td class="Ta(end) Fw(600) Lh(14px)" data-test="OPEN-value">723.25</td>
    data['price_open'] = soup.find( attrs={"data-test":"OPEN-value"}).text
    return data

def get_recent_news(stock):
    print(f"-- getting news data for {stock}")
    gn = GoogleNews()
    search = gn.search(f"stocks {stock}", when = '24h')
    news = search['entries'][0:3]
    return news

def print_stock_update(stock, data, news):
    print(f"Stock:{ stock }")
    price_change = 0
    if int(float(data['price_open'])) != 0: price_change = round( 100 * ( float( data['price_close'])/float(data['price_open'])-1), 2)
    print(f"Open Price:{data['price_open']} Close Price:{data['price_close']} Change:{price_change}% ")
    print("Latest News:")
    for news_item in news:
        print( f"{news_item.published}:{news_item.source.title} - {news_item.title}" )
    print("\n")

def process_stocks():
    for stock in stock_list:
        data = get_stock_price_data( stock )
        news = get_recent_news( stock )
        print_stock_update(stock, data, news)

if __name__ == '__main__':
    start_timer = timeit.default_timer()
    process_stocks()
    end_timer = timeit.default_timer()
    print(f"** Total runtime: {end_timer-start_timer} seconds ***")
Output as follows:
So what's happening here? You are looping through two stocks, TSLA and AAPL, and for each stock the following happens sequentially:
- A call to data = get_stock_price_data( stock ) occurs, which calls requests.get( 'https://finance.yahoo.com/quote/' + stock) to fetch the HTML page for the stock; effectively, this page: https://finance.yahoo.com/quote/TSLA
- Next we use BeautifulSoup() to find the HTML snippets that contain the opening price and the closing price
- After the call to Yahoo is complete, there's a call to news = get_recent_news( stock ), which uses the pygooglenews module to get the latest Google News articles. In fact we used this function in our previous Twitter Bot article.
- Once this is all done, the output is printed with the call to print_stock_update(stock, data, news)
Clearly this could be done asynchronously, since we are looping once per stock, and the call to get the stock data is independent of the call to get the news data. However, one thing that has to happen sequentially is print_stock_update(stock, data, news), which has to wait for both asynchronous calls to complete.
One way you might try is to simply await the website download with:
stock_page = await requests.get( 'https://finance.yahoo.com/quote/' + stock, headers={'Cache-Control': 'no-cache', "Pragma": "no-cache"})
However, you will get the following error:
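The error will be along these lines (the exact wording varies by Python version), because requests.get() returns a plain Response object that is not awaitable:

TypeError: object Response can't be used in 'await' expression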
The reason, as you may have guessed, is that requests.get() is not defined with the async def... construct (it returns a plain Response, not an awaitable) and hence cannot be awaited.
What you can do, however, is use another HTTP module called httpx. Its async client methods are defined with async def... and can be called similarly to requests. That same line would be re-written as:
import httpx
#....

async def get_stock_price_data(stock):
    print(f"-- stock data:getting stock data for {stock}")
    data = {"stock":stock, "price_open":0, "price_close":0 }
    #*** instead of requests.get('https://finance.yahoo.com/quote/' + stock) ****
    client = httpx.AsyncClient()
    stock_page = await client.get( 'https://finance.yahoo.com/quote/' + stock)
    soup = BeautifulSoup(stock_page.text, 'html.parser')
    #<fin-streamer active="" class="Fw(b) Fz(36px) Mb(-4px) D(ib)" data-field="regularMarketPrice" data-pricehint="2" data-symbol="TSLA" data-test="qsp-price" data-trend="none" value="759.63">759.63</fin-streamer>
    data['price_close'] = soup.find('fin-streamer', attrs={"data-symbol":stock, "data-field":"regularMarketPrice"} ).text
    #<td class="Ta(end) Fw(600) Lh(14px)" data-test="OPEN-value">723.25</td>
    data['price_open'] = soup.find( attrs={"data-test":"OPEN-value"}).text
    print(f"-- stock data:done {stock}")
    return data
Ok, that works well. But what about the GoogleNews() code? There is no async version of that function, so how can it be called asynchronously? For this, you can wrap it in a new thread. A 'thread' is a way to run a piece of code under the same CPU process but in parallel. Threads warrant a whole separate article, but for now you can think of a thread as a separate space in which to execute this code, independent of the current execution path. However, to execute something in a separate thread, there's a bit more involved.
The code looks like the following:
### Original Version
def get_recent_news(stock):
    print(f"-- stock news:getting stock data for {stock}")
    gn = GoogleNews()
    search = gn.search(f"stocks {stock}", '24h')  # Slow code to run asynchronously
    news = search['entries'][0:3]
    print(f"-- stock news:done {stock}")
    return news

### Asynchronous Version
async def get_recent_news(stock):
    print(f"-- stock news:getting stock data for {stock}")
    gn = GoogleNews()
    search = await asyncio.get_event_loop().run_in_executor( None, gn.search, f"stocks {stock}", '24h')
    news = search['entries'][0:3]
    print(f"-- stock news:done {stock}")
    return news
Here, what's happening is that we use the await keyword on the gn.search() call, which is now invoked through asyncio.get_event_loop().run_in_executor( .. ). We are asking the asyncio module for access to the event loop (that piece of code which continuously checks for tasks to be done) and then asking it to run gn.search in a separate thread. Note how it is called: the arguments must be passed separately from the function itself, which is why they appear after the function name rather than inside a normal call. You will also notice that the whole function can now be defined as async def get_recent_news(stock).
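One limitation worth noting, with a sketch that assumes you want to keep the when='24h' keyword argument used in the earlier synchronous version: run_in_executor() only accepts positional arguments, so keyword arguments are commonly bundled up with functools.partial:

import asyncio, functools
from pygooglenews import GoogleNews

async def get_recent_news(stock):
    gn = GoogleNews()
    loop = asyncio.get_event_loop()
    # functools.partial wraps the function and its keyword arguments into a single
    # callable that run_in_executor() can run in a worker thread
    search = await loop.run_in_executor(
        None, functools.partial(gn.search, f"stocks {stock}", when='24h')
    )
    return search['entries'][0:3]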
How To Mix Asynchronous And Synchronous Code With Await Async in Python
Now the final problem to solve is how to run the two functions get_stock_price_data( stock ) and get_recent_news(stock) asynchronously, wait for both to finish, and THEN run the print. This is where these steps should all be grouped under one function. This is the trick to mixing asynchronous and synchronous code.
In order to run a group of tasks concurrently as a group you use asyncio.gather(). However, if you want to execute a synchronous function only when ALL the tasks given to asyncio.gather() are complete, you should put the gather() call and the synchronous call together in another async function, and then gather those wrapper functions at the top level:
async def process_stock_batch(stock):
    (data, news) = await asyncio.gather( get_stock_price_data( stock ), get_recent_news(stock) )
    print('-- print:request printing')
    print_stock_update(stock, data, news)
    print('-- print:done')

async def process_stocks():
    run_stock_list = []
    for stock in stock_list:
        run_stock_list.append( process_stock_batch(stock) )
    await asyncio.gather( *run_stock_list )
Before we solve it for the real world example, let's look at a simpler one. Suppose we had the following:
import asyncio, timeit

async def get_web_data_A(index):
    await asyncio.sleep(1)
    print(f"Get Web Data-A[{index}] - sleep 1 second")

async def get_web_data_B(index):
    await asyncio.sleep(1)
    print(f"Get Web Data-B[{index}] - sleep 1 second")

async def process(index, start_timer):
    await asyncio.gather( get_web_data_A(index), get_web_data_B(index) )
    print(f"Calculate [{index}] - Elapsed time:[{timeit.default_timer()-start_timer}]")

async def run_all():
    start_timer = timeit.default_timer()
    for index in range(0,2):
        await process(index, start_timer)

if __name__ == '__main__':
    asyncio.run( run_all() )
This has the following output:
What is encouraging with this code is that even though get_web_data_A() and get_web_data_B() both sleep for 1 second, since they do so asynchronously the runtime for each iteration is still just a little over 1 second. This can be seen in the Calculate [0]... output. However, the problem is that the code still iterates over each index sequentially, meaning index 0 is processed completely first, and only once that's done is index 1 processed. What we want instead is to run all the slow get_web_data_A() and get_web_data_B() calls first, and then run the calculation code afterwards. This is where you need to first create the tasks for ALL the iterations, and then call gather() on all the tasks. See the following code:
import asyncio, timeit

async def get_web_data_A(index):
    await asyncio.sleep(1)
    print(f"Get Web Data-A[{index}] - sleep 1 second")

async def get_web_data_B(index):
    await asyncio.sleep(1)
    print(f"Get Web Data-B[{index}] - sleep 1 second")

async def process(index, start_timer):
    await asyncio.gather( get_web_data_A(index), get_web_data_B(index) )
    print(f"Calculate [{index}] - Elapsed time:[{timeit.default_timer()-start_timer}]")

async def run_all_2():
    start_timer = timeit.default_timer()
    task_queue = []
    for index in range(0,2):
        task_queue.append( process(index, start_timer) )
    await asyncio.gather( *task_queue )

if __name__ == '__main__':
    asyncio.run( run_all_2() )
Here, in the function async def run_all_2(), when we loop we do not call the blocking await asyncio.gather... inside the for loop. Instead, we add all the process(..) tasks to a list called task_queue[], and then at the end of the for loop we call await asyncio.gather( *task_queue ) on all the tasks in one go. Hence, the output is as follows:
You'll notice that ALL of the get_web_data_A() and get_web_data_B() calls run asynchronously, and then the calculate step runs once its data is available. Hence, the elapsed time for all the iterations is only just over 1 second, compared to the previous 2 seconds.
So what does this mean for our real world example for getting stock data from Yahoo and then calling Google News asynchronously, and then only printing the data once both are done? Well, the same principle applies. The code is as follows:
import asyncio, httpx, timeit
from bs4 import BeautifulSoup
from pygooglenews import GoogleNews

stock_list = [ "TSLA", "AAPL"]

async def get_stock_price_data(stock):
    print(f"-- stock data:getting stock data for {stock}")
    data = {"stock":stock, "price_open":0, "price_close":0 }
    client = httpx.AsyncClient()
    stock_page = await client.get( 'https://finance.yahoo.com/quote/' + stock)
    soup = BeautifulSoup(stock_page.text, 'html.parser')
    #<fin-streamer active="" class="Fw(b) Fz(36px) Mb(-4px) D(ib)" data-field="regularMarketPrice" data-pricehint="2" data-symbol="TSLA" data-test="qsp-price" data-trend="none" value="759.63">759.63</fin-streamer>
    data['price_close'] = soup.find('fin-streamer', attrs={"data-symbol":stock, "data-field":"regularMarketPrice"} ).text
    #<td class="Ta(end) Fw(600) Lh(14px)" data-test="OPEN-value">723.25</td>
    data['price_open'] = soup.find( attrs={"data-test":"OPEN-value"}).text
    print(f"-- stock data:done {stock}")
    return data

async def get_recent_news(stock):
    print(f"-- stock news:getting stock data for {stock}")
    gn = GoogleNews()
    search = await asyncio.get_event_loop().run_in_executor( None, gn.search, f"stocks {stock}", '24h')
    news = search['entries'][0:3]
    print(f"-- stock news:done {stock}")
    return news

def print_stock_update(stock, data, news):
    print('-- print:starting print')
    print(f"Stock:{ stock }")
    price_change = 0
    if int(float(data['price_open'])) != 0: price_change = round( 100 * ( float( data['price_close'])/float(data['price_open'])-1), 2)
    print(f"Open Price:{data['price_open']} Close Price:{data['price_close']} Change:{price_change}% ")
    print("Latest News:")
    for news_item in news:
        print( f"{news_item.published}:{news_item.source.title} - {news_item.title}" )
    print("\n")

async def process_stock_batch(stock):
    (data, news) = await asyncio.gather( get_stock_price_data( stock ), get_recent_news(stock) )
    print('-- print:request printing')
    print_stock_update(stock, data, news)
    print('-- print:done')

async def process_stocks():
    run_stock_list = []
    for stock in stock_list:
        run_stock_list.append( process_stock_batch(stock) )
    await asyncio.gather( *run_stock_list )

if __name__ == '__main__':
    start_timer = timeit.default_timer()
    asyncio.run( process_stocks() )
    end_timer = timeit.default_timer()
    print(f"** Total runtime: {end_timer-start_timer} seconds ***")
The key bit of code is async def process_stocks(), which now iterates over each of the stocks, creates the tasks, and then calls await asyncio.gather( *run_stock_list ) on all the stocks in one go. Then, within process_stock_batch(stock), we have the asynchronous call (data, news) = await asyncio.gather( get_stock_price_data( stock ), get_recent_news(stock) ), followed by the synchronous call to print_stock_update(stock, data, news) once both pieces of web data are complete.
Conclusion
async and await are incredibly useful features of Python. The concept takes a bit of getting used to, but once you've got the hang of it, it can noticeably improve the performance of your code by making use of the idle time spent waiting for tasks to complete. Just be sure about the sequencing: be mindful of whether you need a follow-up activity once a task has completed, or whether you can simply continue executing.
This is not easy to grasp as a beginner, but follow the example code above, and if you get stuck feel free to reach out through our email list below.