APIs (Application Programming Interfaces) are the bridges that allow different software systems to communicate with each other.
Install the `requests` library first:

```shell
pip install requests
```
```python
import requests
import json

url = "https://reqres.in/api/users?page=2"
headers = {
    "Accept": "application/json",  # Specify the type of response you want
    "User-Agent": "MyApp/1.0"      # Identify your application
}

try:
    response = requests.get(url, headers=headers, timeout=10)
    # Raise an error for bad responses
    response.raise_for_status()
    data = response.json()   # Parse JSON response
    users = data["data"]     # Extract the 'data' field from the JSON response
    print(users)             # Print the extracted data
    # Pretty print the full JSON response
    print(json.dumps(data, indent=4))
except requests.exceptions.RequestException as e:
    print(f"An error occurred: {e}")
```

The code above sends a GET request to the specified URL, retrieves the response, and prints the data in a formatted manner. It also handles any potential errors that may occur during the request.

```python
import requests

payload = {"name": "test", "salary": "123", "age": "23"}
headers = {
    'User-Agent': 'nginx/1.21.6',
    'Accept': '*/*',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
    'Content-Type': 'application/json'
}

try:
    # Make the POST request
    response = requests.post('https://dummy.restapiexample.com/api/v1/create',
                             json=payload, headers=headers, timeout=10)
    # Raise an error for bad responses
    response.raise_for_status()
    # If the request is successful, parse the JSON response
    data = response.json()
    # Print the response data
    print("Resource created successfully!")
    print(data)
except requests.exceptions.RequestException as e:
    print(f"An error occurred: {e}")
```
Understanding Custom Headers

Custom headers can be crucial when interacting with APIs, especially those requiring authentication or specific content types. Let’s break down the headers used in the POST request:

- User-Agent: Identifies the client software making the request. This can be important for compatibility and analytics purposes.
- Accept: Tells the server what media types the client can handle. Here, `*/*` indicates that any media type is acceptable.
- Cache-Control: Manages caching behavior. `no-cache` means that the client wants a fresh copy of the resource.
- Connection: `keep-alive` keeps the connection open, allowing for more efficient communication.
- Content-Type: Indicates the format of the data being sent. In our example, it’s JSON.
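To see exactly which headers will go over the wire, you can build a request without sending it. This sketch uses `requests.Request` and `.prepare()` (part of the requests API), with the URL and headers from the GET example earlier:

```python
import requests

# Build (but do not send) a request so we can inspect the exact
# headers that will accompany it for these settings.
req = requests.Request(
    "GET",
    "https://reqres.in/api/users?page=2",
    headers={"Accept": "application/json", "User-Agent": "MyApp/1.0"},
)
prepared = req.prepare()
for name, value in prepared.headers.items():
    print(f"{name}: {value}")
```

Prepared-request headers are stored in a case-insensitive dict, so `prepared.headers["accept"]` and `prepared.headers["Accept"]` return the same value.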
HTTP response status codes:
| Code | Status | Description |
|---|---|---|
| 1xx | Informational | |
| 100 | Continue | The server has received the request headers and the client should proceed to send the request body. |
| 101 | Switching Protocols | The requester has asked the server to switch protocols, and the server is acknowledging that it will do so. |
| 2xx | Success | |
| 200 | OK | The request was successful, and the server returned the requested resource. |
| 201 | Created | The request was successful, and a new resource was created as a result. |
| 202 | Accepted | The request has been accepted for processing, but the processing is not yet complete. |
| 204 | No Content | The request was successful, but there is no content to send in the response. |
| 3xx | Redirection | |
| 301 | Moved Permanently | The requested resource has been permanently moved to a new URL. |
| 302 | Found | The requested resource is temporarily located at a different URL. |
| 304 | Not Modified | The resource has not been modified since the last request, so the client can use the cached version. |
| 4xx | Client Errors | |
| 400 | Bad Request | The server could not understand the request due to invalid syntax. |
| 401 | Unauthorized | The client must authenticate itself to get the requested response. |
| 403 | Forbidden | The client does not have permission to access the requested resource. |
| 404 | Not Found | The server could not find the requested resource. |
| 405 | Method Not Allowed | The request method is not supported for the requested resource. |
| 408 | Request Timeout | The server timed out waiting for the request. |
| 409 | Conflict | The request could not be processed because of a conflict in the request, such as an edit conflict. |
| 429 | Too Many Requests | The user has sent too many requests in a given amount of time ("rate limiting"). |
| 5xx | Server Errors | |
| 500 | Internal Server Error | The server encountered an error and could not complete the request. |
| 501 | Not Implemented | The server does not support the functionality required to fulfill the request. |
| 502 | Bad Gateway | The server received an invalid response from the upstream server. |
| 503 | Service Unavailable | The server is not ready to handle the request, usually due to maintenance or overload. |
| 504 | Gateway Timeout | The server, while acting as a gateway, did not receive a timely response from the upstream server. |
This table summarizes the most common HTTP status codes and their meanings.
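Python's standard library can translate these numeric codes for you, with no third-party dependency. A small sketch using `http.HTTPStatus`:

```python
from http import HTTPStatus

def describe_status(code: int) -> str:
    """Return 'code reason-phrase' for a known HTTP status code."""
    try:
        status = HTTPStatus(code)
    except ValueError:
        # Not every integer maps to a registered status code
        return f"{code} Unknown"
    return f"{status.value} {status.phrase}"

for code in (200, 201, 304, 404, 429, 503):
    print(describe_status(code))  # e.g. "404 Not Found"
```

Each `HTTPStatus` member also exposes a longer `description` attribute, similar to the third column of the table above.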
A small helper makes pretty-printing JSON responses reusable:

```python
import json

def jprint(obj):
    text = json.dumps(obj, sort_keys=False, indent=2)
    print(text)
```

- `json.dumps()`: converts a Python object (`obj`) into a JSON-formatted string.
- `sort_keys=False`: controls whether the keys in the JSON output should be sorted. By setting it to False, the keys appear in the order they were inserted into the original Python object.
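For example, calling the helper on a response-shaped dict keeps the keys in insertion order (the helper is repeated here so the snippet runs on its own):

```python
import json

def jprint(obj):
    text = json.dumps(obj, sort_keys=False, indent=2)
    print(text)

# "page" prints before "data" because sort_keys=False
jprint({"page": 2, "data": [{"id": 7, "first_name": "Michael"}]})
```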
Useful exception handlers

The order of the `except` clauses matters: Python checks them top to bottom, and `requests.exceptions.RequestException` is the base class of `Timeout`, `TooManyRedirects`, `HTTPError`, and `ConnectionError`, so the specific handlers must come before it or they will never run.

```python
import json
import requests

url = "https://reqres.in/api/users?page=2"

try:
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    data = response.json()
except requests.exceptions.Timeout:
    print("The request timed out. Please try again later.")
except requests.exceptions.TooManyRedirects:
    print("Too many redirects. Check the URL and try again.")
except requests.exceptions.HTTPError as err:
    if err.response.status_code == 400:
        print("Bad Request: The server could not understand the request.")
    elif err.response.status_code == 401:
        print("Unauthorized: Access is denied due to invalid credentials.")
    elif err.response.status_code == 403:
        print("Forbidden: You do not have permission to access this resource.")
    elif err.response.status_code == 404:
        print("Not Found: The requested resource could not be found.")
    elif err.response.status_code == 500:
        print("Internal Server Error: The server encountered an error and could not complete your request.")
    else:
        print(f"HTTP error occurred: {err}")
except requests.exceptions.ConnectionError:
    print("Connection error: Please check your internet connection.")
except json.JSONDecodeError:
    # Checked before RequestException: recent versions of requests raise a
    # JSONDecodeError that subclasses both of these
    print("Failed to decode JSON response. Please check the server response.")
except requests.exceptions.RequestException as e:
    print(f"An error occurred: {e}")
except Exception as e:
    print(f"An unexpected error occurred: {e}")
```
urllib library

Python's urllib library is a built-in library used to access data from the web. It is commonly used to send HTTP requests (such as GET and POST), parse URLs, and download web resources.

🔧 Installation

urllib is part of Python's standard library, so it does not need to be installed separately.

✅ You only need Python installed.

```shell
python --version
```

If Python is installed, you can use urllib directly.
📚 Main Modules of urllib

| Module Name | Use |
|---|---|
| urllib.request | Sending requests to URLs and receiving data |
| urllib.parse | Parsing (splitting up) and modifying URLs |
| urllib.error | Handling errors |
| urllib.robotparser | Reading robots.txt files (useful in web scraping) |
🧪 Basic Examples

1. Downloading Data from a URL

```python
from urllib.request import urlopen

url = "https://www.example.com"
response = urlopen(url)
html = response.read()
print(html.decode("utf-8"))
```
2. Building a URL with Query Parameters (GET Request)

```python
from urllib.parse import urlencode
from urllib.request import urlopen

params = {'name': 'Himanshu', 'age': 25}
query_string = urlencode(params)
url = "https://httpbin.org/get?" + query_string

response = urlopen(url)
print(response.read().decode())
```

3. Sending a POST Request

```python
from urllib import request, parse

data = parse.urlencode({'username': 'Himanshu', 'password': '1234'}).encode()
req = request.Request("https://httpbin.org/post", data=data, method="POST")
response = request.urlopen(req)
print(response.read().decode())
```
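The POST example above sends form-encoded data (`application/x-www-form-urlencoded`). To send JSON with urllib you serialize the payload yourself and set the Content-Type header explicitly; unlike requests, urllib has no `json=` shortcut. A sketch (the request is built but not sent here; `request.urlopen(req)` would send it):

```python
import json
from urllib import request

# Serialize the payload to JSON bytes by hand
payload = json.dumps({"username": "Himanshu", "score": 42}).encode("utf-8")
req = request.Request(
    "https://httpbin.org/post",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.get_method())                # POST
print(req.get_header("Content-type"))  # application/json
```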
4. Parsing a URL

```python
from urllib.parse import urlparse

url = "https://www.example.com/page?name=Himanshu&age=25"
parsed = urlparse(url)

print(parsed.scheme)  # https
print(parsed.netloc)  # www.example.com
print(parsed.path)    # /page
print(parsed.query)   # name=Himanshu&age=25
```
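The module table earlier also lists urllib.robotparser, which checks whether a crawler may fetch a URL. This sketch feeds it robots.txt rules directly as lines so no network access is needed; the rules and bot name are made up for illustration (normally you would call `rp.set_url(".../robots.txt")` followed by `rp.read()`):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules, supplied inline instead of fetched
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

print(rp.can_fetch("MyBot", "https://www.example.com/page"))       # True
print(rp.can_fetch("MyBot", "https://www.example.com/private/x"))  # False
```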
💼 Real-world Project Examples

📌 1. Web Scraper (pulling data from websites)

Use case: scraping data from news websites, job portals, or product data websites.

```python
from urllib.request import urlopen
from bs4 import BeautifulSoup  # Requires: pip install beautifulsoup4

url = "https://news.ycombinator.com"
html = urlopen(url).read()
soup = BeautifulSoup(html, 'html.parser')

# Print the target of every link on the page
for link in soup.find_all('a'):
    print(link.get('href'))
```
