Beginner
Twitter bots can be super useful for automating interactions on social media to build and grow engagement, as well as for automating routine tasks. There have been many changes to the Twitter developer program, and sometimes it's unclear how to even create a tweet bot. This article walks through, step by step, how to create a Twitter bot with the latest Twitter API v2, and provides code you can copy and paste into your next project. We end by building a more useful bot that automatically posts articles about Python.
In a nutshell, a Twitter bot works by running your code on your own computer or compute environment. The code can either be triggered by a Twitter webhook (not covered here), which Twitter calls on a given event, or run periodically to read and send tweets (covered in this article). Either way there are some commonalities, and in this article we will walk through how to read tweets, and then how to send tweets sourced from Google News articles about Python!
Step 1: Sign up for Developer program
If you haven't already, you will need to either sign in or sign up for a Twitter account through twitter.com. Make sure your Twitter account has an email address allocated to it (if you're not aware, you can create a Twitter account with just your mobile phone number).

Next go to developer.twitter.com and sign up for the developer program (yes, you need to sign up for a second time). This enables you to create applications.

First you'll need to answer some questions on the purpose of the developer account. You can choose "Make a Bot".

Next you will need to agree to the terms and conditions, and then a verification email will be sent to your email address from your twitter account.
When you click on the email to verify your account, you can then enter your app name. This is an internal name and something that will make it easy for you to reference.

Once you click on keys, you will be given a set of security token keys like the ones below. Copy them to a safe place, as your Python code will need them to access your specific bot. If you lose your keys, or someone gains access to them for some reason, you can generate new keys from your developer.twitter.com console.
There are three keys at this stage:
- API Key (think of this like a username)
- API Key Secret (think of this like a password)
- Bearer Token (used for read queries such as getting the latest tweets)

At the bottom of the screen you'll see a "Skip to Dashboard" link; when you click on it you'll see an overview of your API metrics.
Within this screen you can see, for example, the limit on the number of calls per month and how much you have already consumed.

Next, click on the project so we can generate the access tokens. With the previous keys you can only read tweets; you cannot create them yet.
After clicking on the project, choose the "Keys and tokens" tab, and at the bottom you can generate the "Access Tokens". On this screen you can also regenerate the API Keys and Bearer Token you created before, in case your keys were compromised or you forgot them.

Just like before, generate the keys and copy them.

By now, you have five security tokens:
- API Key – also known as the Consumer Key (think of this like a username)
- API Key Secret – also known as the Consumer Secret (think of this like a password)
- Bearer Token (used for read queries such as getting latest tweets)
- Access Token (‘username’ to allow you to create tweets)
- Access Token Secret (‘password’ to allow you to create tweets)
Step 2: Test your twitter API query
Now that you have the API keys, you can run some tests. If you are using a Linux-based machine you can use the curl command to do a query. Otherwise, you can use a site such as https://reqbin.com/curl to make an online curl request.
Here's a simple example to get the most recent tweets. It uses the endpoint https://api.twitter.com/2/tweets/search/recent, which must include the query parameter and supports a range of parameter options (find the full list in the Twitter query documentation).
curl --request GET 'https://api.twitter.com/2/tweets/search/recent?query=from:pythonhowtocode' --header 'Authorization: Bearer <your bearer token from step 1>'
The output is as follows:
{
  "data": [{
    "id": "1523251860110405633",
    "text": "See our latest article on THE complete beginner guide on creating a #discord #bot in #python \n\nEasily add this to your #100DaysOfCode #100daysofcodechallenge #100daysofpython \n\nhttps://t.co/4WKvDVh1g9"
  }],
  "meta": {
    "newest_id": "1523251860110405633",
    "oldest_id": "1523251860110405633",
    "result_count": 1
  }
}
Here’s a much more complex example. This includes the following parameters:
- %23 – the escape characters for #, used to search for hashtags. The example below searches for the hashtag #python (case insensitive)
- %20 – the escape character for a space; it separates different filters with an AND operation
- -is:retweet – excludes retweets. The '-' sign preceding is negates the filter
- -is:reply – excludes replies. The '-' sign preceding is negates the filter
- max_results=20 – an integer that defines the maximum number of returned results, in this case 20 results
- expansions=author_id – makes sure the tweet author's internal Twitter id and actual username are included under an includes section at the bottom of the returned JSON
- tweet.fields=public_metrics,created_at – returns the interaction metrics such as number of likes, number of retweets, etc., as well as the time (in the GMT timezone) when the tweet was created
- user.fields=created_at,location – returns when the user account was created and the user's self-reported location from their profile
curl --request GET 'https://api.twitter.com/2/tweets/search/recent?query=%23python%20-is:retweet%20-is:reply&max_results=20&expansions=author_id&tweet.fields=public_metrics,created_at&user.fields=created_at,location' --header 'Authorization: Bearer <Your Bearer Token from Step 1>'
The result of this looks like the following – notice that the username details are in the includes section, where you can link the tweet to the username via the author_id field.
{
  "data": [{
    "id": "1523688996676812800",
    "text": "NEED a #JOB?\nSign up now https://t.co/o7lVlsl75X\nFREE. NO MIDDLEMEN\n#Jobs #AI #DataAnalytics #MachineLearning #Python #JavaScript #WomenWhoCode #Programming #Coding #100DaysofCode #DEVCommunity #gamedev #gamedevelopment #indiedev #IndieGameDev #Mobile #gamers #RHOP #BTC #ETH #SOL https://t.co/kMYD2417jR",
    "author_id": "1332714745871421443",
    "public_metrics": {
      "retweet_count": 3,
      "reply_count": 0,
      "like_count": 0,
      "quote_count": 0
    },
    "created_at": "2022-05-09T15:39:00.000Z"
  },
  ....
  }],
  "includes": {
    "users": [{
      "name": "Job Preference",
      "id": "1332714745871421443",
      "username": "JobPreference",
      "created_at": "2020-11-28T15:56:01.000Z"
    },
    ....
}
Step 3: Reading tweets with python code
Building on the tests conducted in Step 2, it is a simple extra step to convert this to Python code. We'll first use the requests module, and afterwards show a simpler way with the tweepy library. Here's a structured version of the code, with the logic encapsulated in a class.
import requests, json
from urllib.parse import quote
from pprint import pprint

class TwitterBot():
    URL_SEARCH_RECENT = 'https://api.twitter.com/2/tweets/search/recent'

    def __init__(self, bearer_key):
        self.bearer_key = bearer_key

    def search_recent(self, query, include_retweets=False, include_replies=False):
        url = self.URL_SEARCH_RECENT + "?query=" + quote(query)
        if not include_retweets: url += quote(' ') + '-is:retweet'
        if not include_replies: url += quote(' ') + '-is:reply'
        url += '&max_results=20&expansions=author_id&tweet.fields=public_metrics,created_at&user.fields=created_at,location'
        headers = {'Authorization': 'Bearer ' + self.bearer_key}
        r = requests.get(url, headers=headers)
        r.encoding = r.apparent_encoding  # ensure UTF-8 is used if there are unicode characters
        return json.loads(r.text)

# create an instance and pass in your Bearer Token
t = TwitterBot('<Insert your Bearer Token from Step 1>')
pprint(t.search_recent('#python'))
The above code is fairly straightforward and does the following:
- TwitterBot class – encapsulates the logic to send the API requests
- TwitterBot.search_recent – takes in the query string, escapes any special characters, then calls requests.get() against the https://api.twitter.com/2/tweets/search/recent API
- pprint() – simply prints the output in a more readable format
This is the output:


However, there is a simpler way which is to use tweepy.
pip install tweepy
Next you can use the tweepy module to search recent tweets:
import tweepy

client = tweepy.Client(bearer_token='<insert your token here from previous step>')
query = '#python -is:retweet -is:reply'  # exclude retweets and replies with '-'
tweets = client.search_recent_tweets(query=query,
                                     tweet_fields=['public_metrics', 'context_annotations', 'created_at'],
                                     user_fields=['username', 'created_at', 'location'],
                                     expansions=['entities.mentions.username', 'author_id'],
                                     max_results=10)

# The details of the users are in the 'includes' list
user_data = {}
for raw_user in tweets.includes['users']:
    user_data[raw_user.id] = raw_user

for index, tweet in enumerate(tweets.data):
    print(f"[{index}]::@{user_data[tweet.author_id]['username']}::{tweet.created_at}::{tweet.text.strip()}\n")
    print("------------------------------------------------------------------------------")
Output as follows:

Please note that after calling the API a few times, your number of tweets consumed will have increased and may have hit the limit. You can always visit the dashboard at https://developer.twitter.com/en/portal/dashboard to see how much has been consumed. Note that this counts the actual number of tweets returned, not the number of API calls, so the quota can get consumed pretty quickly.

Step 4: Sending out a tweet
So far we’ve only been reading tweets. In order to send a tweet you can use the create_tweet() function of tweepy.
client = tweepy.Client(consumer_key="<API key from above - see step 1>",
                       consumer_secret="<API Key secret - see step 1>",
                       access_token="<Access Token - see step 1>",
                       access_token_secret="<Access Token Secret - see step 1>")

# Replace the text with whatever you want to Tweet about
response = client.create_tweet(text='A little girl walks into a pet shop and asks for a bunny. The worker says “the fluffy white one or the fluffy brown one?” The girl then says, “I don’t think my python really cares.”')
print(response)
Output from Console:

Output from Twitter:

How to Send Automated Tweets About the Latest News
To make this a more useful bot, rather than one that simply tweets static text, we'll make it tweet about the latest Python news.
In order to search for news information, you can use the python library pygooglenews
pip install pygooglenews
The library searches the Google News RSS feed and was developed by Artem Bugara. You can read his full article on how he developed the library. You pass in a keyword and a time horizon to make it work. Here's an example that finds the latest Python articles from the last 12 hours.
from pygooglenews import GoogleNews

gn = GoogleNews()
search = gn.search('python programming', when='12h')
for article in search['entries']:
    print(article.title)
    print(article.published)
    print(article.source.title)
    print('-' * 80)  # string multiplier - show '-' 80 times
Here’s the output:
So, the idea is to have the Twitter bot tweet a random article related to Python programming. The gn.search() function returns all the articles as a list under the entries dictionary key. We will simply pick a random one and construct the tweet from the article title and the link to the article.
import tweepy
from pygooglenews import GoogleNews
from random import randint

client = tweepy.Client(consumer_key="<your consumer/API key - see step 1>",
                       consumer_secret="<your consumer/API secret - see step 1>",
                       access_token="<your access token key - see step 1>",
                       access_token_secret="<your access token secret - see step 1>")

gn = GoogleNews()
search = gn.search('python programming', when='24h')

# Find a random article from the last 24 hours using randint between index 0 and the last index
article = search['entries'][randint(0, len(search['entries']) - 1)]

# construct the tweet text
tweet_text = f"In python news: {article.title}. See full article: {article.link}. #python #pythonprogramming"

# Fire off the tweet!
response = client.create_tweet(text=tweet_text)
print(response)
Output from the console on the return result:

And, most importantly, here's the tweet from our @pythonhowtocode account! Twitter automatically pulled the article image.

This has currently been scheduled as a daily background job!
How To Use Python msgspec for Fast JSON Serialization
Intermediate
JSON serialization is in the hot path of almost every Python web service: every API response encodes data to JSON, every incoming request decodes it. If your service handles thousands of requests per second, JSON encoding time adds up fast. Python’s built-in json module is correct and convenient, but it was not built for speed — and libraries like orjson are fast but handle only basic Python types. What if you want both speed and type safety?
msgspec is a high-performance serialization library that encodes and decodes JSON (and MessagePack) 5-10x faster than the standard library while also providing automatic type validation. You define your data shape once using msgspec.Struct — like a Pydantic model but with a smaller memory footprint and faster instantiation — and msgspec handles encoding, decoding, and validation in a single C-extension call.
This article covers installing msgspec, defining Struct classes, encoding and decoding JSON, using type annotations for automatic validation, handling optional and nested fields, working with MessagePack, and benchmarking against the standard library. By the end you will have a complete toolkit for high-performance, type-safe JSON serialization in Python.
msgspec Quick Example
# quick_msgspec.py
import msgspec
import msgspec.json

class User(msgspec.Struct):
    name: str
    email: str
    age: int
    is_active: bool = True

# Encode to JSON bytes
user = User(name="Alice", email="alice@example.com", age=30)
encoded = msgspec.json.encode(user)
print(encoded)
print(type(encoded))

# Decode from JSON bytes -- returns a User instance with validation
raw = b'{"name":"Bob","email":"bob@example.com","age":25}'
decoded = msgspec.json.decode(raw, type=User)
print(decoded)
print(type(decoded))
b'{"name":"Alice","email":"alice@example.com","age":30,"is_active":true}'
<class 'bytes'>
User(name='Bob', email='bob@example.com', age=25, is_active=True)
<class '__main__.User'>
The encode() function serializes a Struct (or any supported Python type) to JSON bytes. The decode() function deserializes JSON bytes and validates them against a type — if the JSON does not match the expected shape, a ValidationError is raised. Unlike the stdlib json module, the output is bytes, not str, which is what HTTP servers and most network libraries expect anyway.
What Is msgspec and When Should You Use It?
msgspec is a C-extension library by Jim Crist-Harif that provides both serialization performance and type safety. It is designed as a faster, lower-footprint alternative to Pydantic for use cases where you need validated deserialization at high throughput.
| Library | JSON Speed | Validation | Memory | Type |
|---|---|---|---|---|
| json (stdlib) | Baseline | No | Low | dict/list |
| orjson | 5-10x faster | No | Low | dict/list |
| msgspec | 5-10x faster | Yes | Very low | Struct |
| Pydantic v2 | 2-3x faster | Yes (rich) | Higher | BaseModel |
Use msgspec when you need high-throughput JSON encoding/decoding with type safety and low memory usage — particularly in web APIs (FastAPI, Starlette), event streaming, and data pipeline code where JSON parsing is in the hot path. Use Pydantic when you need rich validators, field aliases, custom serializers, or deep ecosystem integration. Use orjson when you need maximum speed with no schema requirements.
Installation
pip install msgspec
Successfully installed msgspec-0.18.6
msgspec ships as a pre-compiled C extension for Linux, macOS, and Windows on both CPython and PyPy. Import with import msgspec for Struct definitions and import msgspec.json for JSON operations.
Defining Structs
A msgspec.Struct is a fast, memory-efficient data class. It uses Python type annotations to define its fields and generates optimized __init__, __repr__, __eq__, and encoder/decoder hooks automatically.
# struct_basics.py
import msgspec
from typing import Optional, List
from datetime import datetime

class Address(msgspec.Struct):
    street: str
    city: str
    country: str
    postal_code: str = ""  # Default value

class Order(msgspec.Struct):
    order_id: str
    customer_name: str
    items: List[str]
    total: float
    shipping_address: Address  # Nested Struct
    created_at: datetime  # datetime is supported natively
    notes: Optional[str] = None  # Optional field

# Create instances
addr = Address(street="123 Main St", city="Austin", country="US", postal_code="78701")
order = Order(
    order_id="ORD-001",
    customer_name="Alice Smith",
    items=["Widget A", "Gadget B"],
    total=89.99,
    shipping_address=addr,
    created_at=datetime(2026, 4, 30, 10, 0, 0),
)

print(order)
print()
print(f"Order ID: {order.order_id}")
print(f"City: {order.shipping_address.city}")
print(f"Notes: {order.notes}")  # None -- the default
Order(order_id='ORD-001', customer_name='Alice Smith', items=['Widget A', 'Gadget B'], total=89.99, shipping_address=Address(street='123 Main St', city='Austin', country='US', postal_code='78701'), created_at=datetime.datetime(2026, 4, 30, 10, 0), notes=None)
Order ID: ORD-001
City: Austin
Notes: None
Structs are mutable by default; pass frozen=True for an immutable, hashable variant. They have no __dict__, which makes them significantly more memory-efficient than regular Python objects or dataclasses. Access fields as attributes. Nested Structs, lists, dicts, and Python's standard types (datetime, UUID, Decimal) are all supported as field types.
Encoding: Struct to JSON
Use msgspec.json.encode() to serialize any Struct or supported Python value to JSON bytes:
# encoding.py
import msgspec
import msgspec.json

class Product(msgspec.Struct):
    id: int
    name: str
    price: float
    in_stock: bool
    tags: list

product = Product(id=1, name="Widget Pro", price=29.99,
                  in_stock=True, tags=["electronics", "gadgets"])

# encode returns bytes
json_bytes = msgspec.json.encode(product)
print(json_bytes)
print(type(json_bytes))

# Decode bytes to string if needed
json_str = json_bytes.decode("utf-8")
print(json_str)

# Encode a plain Python dict (msgspec handles these too)
data = {"key": "value", "num": 42, "flag": True}
print(msgspec.json.encode(data))

# Encode a list of Structs
products = [
    Product(id=i, name=f"Product {i}", price=9.99 * i,
            in_stock=i % 2 == 0, tags=[])
    for i in range(1, 4)
]
print(msgspec.json.encode(products))
b'{"id":1,"name":"Widget Pro","price":29.99,"in_stock":true,"tags":["electronics","gadgets"]}'
<class 'bytes'>
{"id":1,"name":"Widget Pro","price":29.99,"in_stock":true,"tags":["electronics","gadgets"]}
b'{"key":"value","num":42,"flag":true}'
b'[{"id":1,...},{"id":2,...},{"id":3,...}]'
Field names in the JSON output match the Struct attribute names exactly. If you need different JSON field names (e.g., camelCase in JSON, snake_case in Python), use msgspec.Struct with the rename option or rename individual fields with msgspec.field(name="camelCaseName").
Decoding with Validation
The decode() function deserializes JSON and validates it against your type annotation simultaneously. Invalid data raises a descriptive msgspec.ValidationError:
# decoding_validation.py
import msgspec
import msgspec.json
from typing import Optional, List

class UserProfile(msgspec.Struct):
    user_id: int
    username: str
    email: str
    score: float
    tags: List[str]
    bio: Optional[str] = None

# Decode valid JSON
valid_json = b'''
{
    "user_id": 42,
    "username": "alice",
    "email": "alice@example.com",
    "score": 9.5,
    "tags": ["python", "developer"],
    "bio": "Python enthusiast"
}
'''
profile = msgspec.json.decode(valid_json, type=UserProfile)
print(f"User: {profile.username}, Score: {profile.score}")
print(f"Tags: {profile.tags}")
print()

# Decode with missing optional field -- OK
minimal = b'{"user_id":1,"username":"bob","email":"b@b.com","score":7.0,"tags":[]}'
p2 = msgspec.json.decode(minimal, type=UserProfile)
print(f"Bio (optional): {p2.bio}")  # None
print()

# Decode with wrong type -- raises ValidationError
try:
    bad = b'{"user_id":"not_an_int","username":"x","email":"x@x.com","score":1.0,"tags":[]}'
    msgspec.json.decode(bad, type=UserProfile)
except msgspec.ValidationError as e:
    print(f"Validation error: {e}")

# Decode with missing required field -- raises ValidationError
try:
    missing = b'{"user_id":1,"username":"alice"}'
    msgspec.json.decode(missing, type=UserProfile)
except msgspec.ValidationError as e:
    print(f"Missing field error: {e}")
User: alice, Score: 9.5
Tags: ['python', 'developer']
Bio (optional): None
Validation error: Expected `int`, got `str` - at `$.user_id`
Missing field error: Object missing required field `email` - at `$`
The validation error messages include a JSON path ($.user_id) that tells you exactly which field failed and why. This replaces the typical pattern of calling json.loads() and then manually checking types — msgspec does both in one step, at C speed. The path notation ($ for the root, $.field for a specific field, $.list[0] for a list element) makes debugging API input validation errors straightforward.
Reusable Encoder and Decoder Objects
For high-throughput code, create Encoder and Decoder objects once and reuse them. This avoids per-call setup overhead and enables encoder customization:
# encoder_decoder.py
import msgspec
import msgspec.json

class Event(msgspec.Struct):
    event_type: str
    payload: dict
    timestamp: float

# Create once at module level -- reuse for every request
encoder = msgspec.json.Encoder()
decoder = msgspec.json.Decoder(Event)

# Encode
event = Event(event_type="user_login", payload={"user_id": 42}, timestamp=1714483200.0)
data = encoder.encode(event)
print(data)

# Decode
raw = b'{"event_type":"purchase","payload":{"order_id":"123"},"timestamp":1714483260.0}'
decoded = decoder.decode(raw)
print(decoded)
print(f"Type: {decoded.event_type}, Order: {decoded.payload.get('order_id')}")
b'{"event_type":"user_login","payload":{"user_id":42},"timestamp":1714483200.0}'
Event(event_type='purchase', payload={'order_id': '123'}, timestamp=1714483260.0)
Type: purchase, Order: 123
Reusable encoder/decoder objects are the pattern to use in web servers (FastAPI, Flask) where the same type is encoded or decoded on every request. Create them at module level (outside of request handlers) so the per-type setup cost is paid once at startup, not on every request.
Real-Life Example: FastAPI Response Serialization
Here is how to use msgspec for high-performance JSON responses in a FastAPI application:
# fastapi_msgspec.py
# Install: pip install fastapi uvicorn msgspec
from fastapi import FastAPI
from fastapi.responses import Response
import msgspec
import msgspec.json
from typing import List
from datetime import datetime

app = FastAPI()

class Product(msgspec.Struct):
    id: int
    name: str
    price: float
    in_stock: bool
    updated_at: datetime

# Prebuilt encoder -- created once at module load time
encoder = msgspec.json.Encoder()

# Sample data (in a real app, this comes from a database)
PRODUCTS = [
    Product(id=i, name=f"Product {i}", price=round(9.99 * i, 2),
            in_stock=i % 3 != 0, updated_at=datetime(2026, 4, 30))
    for i in range(1, 101)
]

@app.get("/products")
def list_products() -> Response:
    """Return all products as JSON -- using msgspec for fast encoding."""
    return Response(
        content=encoder.encode(PRODUCTS),
        media_type="application/json"
    )

@app.get("/products/{product_id}")
def get_product(product_id: int) -> Response:
    """Return a single product by ID."""
    product = next((p for p in PRODUCTS if p.id == product_id), None)
    if product is None:
        return Response(content=b'{"error":"not found"}',
                        media_type="application/json", status_code=404)
    return Response(
        content=encoder.encode(product),
        media_type="application/json"
    )

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)

# Run with: python fastapi_msgspec.py
# GET /products returns 100 products as JSON
# GET /products/1 returns a single product
By bypassing FastAPI’s built-in JSON serialization (which uses the stdlib json module via Pydantic) and using a msgspec Encoder directly, you get 5-10x faster serialization for response bodies. The key pattern: return fastapi.responses.Response with pre-encoded bytes instead of returning a Python dict and letting FastAPI encode it. This is the approach used by high-traffic Python APIs to shave milliseconds off every response.
Frequently Asked Questions
When should I use msgspec instead of Pydantic?
Use msgspec when performance is the primary concern and you do not need Pydantic’s ecosystem features: validators on individual fields, aliases, computed fields, custom serializers, or deep framework integration (Django, SQLAlchemy). msgspec Structs are faster to create, use less memory, and encode/decode faster than Pydantic models. Use Pydantic when you need its rich validation API or when your framework requires it (FastAPI’s dependency injection uses Pydantic models natively).
What is MessagePack and when should I use it instead of JSON?
MessagePack is a binary serialization format that is more compact and faster to parse than JSON. Use msgspec.msgpack.encode() and msgspec.msgpack.decode() for internal service-to-service communication where both sides speak Python. MessagePack is about 30% smaller than equivalent JSON for typical data and faster to encode/decode. Stick with JSON when the data needs to be human-readable, logged, or consumed by a non-msgspec client.
How do I make a Struct immutable?
Pass frozen=True to the Struct class: class MyStruct(msgspec.Struct, frozen=True). By default, Structs are mutable, so field assignment (obj.field = value) works out of the box and you can update individual fields after construction. Frozen Structs disallow assignment and are hashable (can be used as dict keys or in sets); mutable Structs are not.
Does msgspec support Struct inheritance?
Yes, with limitations. A Struct subclass inherits all parent fields. However, a subclass cannot override parent fields or change their types. Struct inheritance is useful for adding fields to a base type without redefining everything, but it is not as flexible as Python class inheritance. For polymorphic types, use typing.Union with a tag field to distinguish subtypes during decoding.
msgspec encodes to bytes, but I need a str. How do I convert?
Call .decode() on the bytes: json_str = msgspec.json.encode(obj).decode("utf-8"). This adds one small allocation, but the total is still faster than json.dumps(). Most HTTP servers (WSGI/ASGI) accept bytes directly in the response body, so the conversion is often unnecessary in practice.
Conclusion
msgspec gives you the best of both worlds: the speed of a C-extension JSON library and the type safety of a schema validation library. You have seen how to define Struct classes with type annotations, encode and decode JSON with automatic validation, use reusable Encoder/Decoder objects for high-throughput code, handle optional and nested fields, and wire msgspec into FastAPI for fast API responses. The official documentation at jcristharif.com/msgspec covers advanced topics including custom hooks, YAML/TOML support, and the full type system.
Pro Tips for Building a Better Twitter Bot
1. Respect Rate Limits with Exponential Backoff
The Twitter API enforces strict rate limits. Instead of crashing when you hit one, implement exponential backoff to retry gracefully. Wrap your API calls in a retry function that doubles the wait time after each failed attempt, starting from 1 second up to a maximum of 64 seconds. This keeps your bot running reliably without getting your credentials revoked.
# rate_limit_handler.py
import time
import requests

def api_call_with_backoff(url, headers, max_retries=5):
    wait_time = 1
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            return response.json()
        elif response.status_code == 429:
            print(f"Rate limited. Waiting {wait_time}s...")
            time.sleep(wait_time)
            wait_time = min(wait_time * 2, 64)
        else:
            response.raise_for_status()
    raise Exception("Max retries exceeded")
Output:
Rate limited. Waiting 1s...
Rate limited. Waiting 2s...
{'data': [{'id': '1234567890', 'text': 'Hello world'}]}
2. Never Hardcode API Keys
Store your API credentials in environment variables or a .env file, never in your source code. If you accidentally push hardcoded keys to a public GitHub repo, bots will find and abuse them within minutes. Use the python-dotenv library to load credentials from a .env file that you add to your .gitignore.
# secure_credentials.py
import os
from dotenv import load_dotenv

load_dotenv()

BEARER_TOKEN = os.getenv("TWITTER_BEARER_TOKEN")
API_KEY = os.getenv("TWITTER_API_KEY")
API_SECRET = os.getenv("TWITTER_API_SECRET")

if not BEARER_TOKEN:
    raise ValueError("TWITTER_BEARER_TOKEN not set in .env file")
3. Add Logging Instead of Print Statements
Replace print() calls with Python’s built-in logging module. Logging gives you timestamps, severity levels, and the ability to write to files — essential for debugging a bot that runs unattended. When your bot tweets something unexpected at 3 AM, logs are the only way to figure out what happened.
# bot_with_logging.py
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(message)s",
    handlers=[
        logging.FileHandler("bot.log"),
        logging.StreamHandler()
    ]
)
logger = logging.getLogger(__name__)

logger.info("Bot started successfully")
logger.warning("Approaching rate limit: 14/15 requests used")
logger.error("Failed to post tweet: 403 Forbidden")
Output:
2026-03-26 10:15:30 [INFO] Bot started successfully
2026-03-26 10:15:31 [WARNING] Approaching rate limit: 14/15 requests used
2026-03-26 10:15:32 [ERROR] Failed to post tweet: 403 Forbidden
4. Track Posted Content to Avoid Duplicates
Bots that post the same content repeatedly get flagged and suspended. Keep a simple record of what you have already tweeted using a JSON file or SQLite database. Before posting, check if the content has been posted before. This is especially important for news bots that might encounter the same story from multiple sources.
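A minimal sketch of that pattern using a JSON file — the file name and helper names here are our own choices, not from any library:

```python
# posted_tracker.py -- sketch: track posted links in a JSON file to avoid duplicates
import json
import os

HISTORY_FILE = "posted_history.json"

def load_history():
    """Return the set of links already tweeted (empty set on first run)."""
    if os.path.exists(HISTORY_FILE):
        with open(HISTORY_FILE) as f:
            return set(json.load(f))
    return set()

def record_post(link, history):
    """Add a link to the in-memory history and persist it to disk."""
    history.add(link)
    with open(HISTORY_FILE, "w") as f:
        json.dump(sorted(history), f)

history = load_history()
link = "https://example.com/some-python-article"
if link not in history:
    # client.create_tweet(text=...) would go here
    record_post(link, history)
    print("Posted:", link)
else:
    print("Skipping duplicate:", link)
```

For higher volume, swap the JSON file for an SQLite table with the link as a unique key.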
5. Use a Scheduler for Consistent Posting
Instead of running your bot in a loop with time.sleep(), use a proper scheduler like schedule or APScheduler. Schedulers handle timing more reliably, support cron-like expressions, and make it easy to run different tasks at different intervals. For production bots, consider using system-level scheduling with cron (Linux) or Task Scheduler (Windows).
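For the cron route, a single crontab line is enough. The script and log paths below are hypothetical — substitute your own:

```shell
# Edit your crontab with: crontab -e
# Run the news bot every day at 09:00, appending output to a log file
0 9 * * * /usr/bin/python3 /home/youruser/news_bot.py >> /home/youruser/bot.log 2>&1
```

The five fields are minute, hour, day of month, month, and day of week; `*` means "every".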
Frequently Asked Questions
Can I still build a Twitter bot with the API?
Yes, but access has changed. The free tier of the X (formerly Twitter) API v2 allows basic posting. For reading tweets or higher volume, you need a paid plan. Check current pricing at developer.x.com.
What Python library should I use for the Twitter/X API?
Use tweepy for the most mature Python wrapper with v2 API support. It handles OAuth 2.0 authentication, rate limiting, and provides clean methods for posting, searching, and streaming.
How do I authenticate with the Twitter API v2?
Use OAuth 2.0 Bearer Token for read-only access or OAuth 1.0a for posting. Generate credentials in the X Developer Portal, then pass them to tweepy.Client().
What are the rate limits for the Twitter API?
Rate limits vary by endpoint and plan. The free tier allows 1,500 tweets per month. Always implement rate limit handling with tweepy’s wait_on_rate_limit=True.
What can a Twitter bot do?
Bots can auto-post content, reply to mentions, retweet by keyword, track hashtags, analyze sentiment, and provide automated responses. Always follow the X API terms of service.
Hey,
Thank you so much! I have tried sample codes from other tutorials, including twitter API documentation and none of that really worked. Your code works nice, thank you really.
David
Thanks for the feedback, glad it was helpful.