Handling Database disconnects in SQLAlchemy

Have you heard about connection pools?

A connection pool is a standard technique used to maintain long-running connections in memory for efficient re-use, as well as to provide management for the total number of connections an application might use simultaneously. Particularly for server-side web applications, a connection pool is the standard way to maintain a “pool” of active database connections in memory which are reused across requests.

The connection pool is the bedrock of database connection handling, but when pooled connections become stale or unusable, things can get messy.
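
For a concrete picture, here is a minimal sketch of creating a pooled engine directly with SQLAlchemy. The connection URL and the sizing values are placeholder assumptions, not anything prescribed by this page:

    from sqlalchemy import create_engine, text

    # Placeholder URL; pool_size/max_overflow values are illustrative only.
    engine = create_engine(
        "postgresql://user:password@localhost/mydb",
        pool_size=5,        # connections kept open in the pool
        max_overflow=10,    # temporary extra connections allowed under load
    )

    # connect() checks a connection out of the pool; leaving the block
    # returns it to the pool instead of closing it.
    with engine.connect() as conn:
        conn.execute(text("SELECT 1"))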

A common use case is allowing the connection pool to gracefully recover when the database server has been restarted and all previously established connections are no longer functional. There are two approaches to this:

  1. Disconnect Handling - Pessimistic

  2. Disconnect Handling - Optimistic

  1. Pessimistic Disconnect Handling

Let's look at how to implement pessimistic disconnect handling with the flask-sqlalchemy package. In your Flask configuration, you only need to set the following engine options:

    SQLALCHEMY_ENGINE_OPTIONS = {
        "pool_pre_ping": True,
        "pool_recycle": 300,
    }

The pessimistic approach refers to emitting a test statement on the SQL connection at the start of each connection pool checkout, to test that the database connection is still viable. Typically this is a simple statement like “SELECT 1”, but it may also make use of some DBAPI-specific method to test the connection for liveness. The approach adds a small bit of overhead to the connection checkout process, but is otherwise the simplest and most reliable approach to completely eliminating database errors due to stale pooled connections.

The “pre ping” feature will normally emit SQL equivalent to “SELECT 1” each time a connection is checked out from the pool; if an error is raised that is detected as a “disconnect” situation, the connection will be immediately recycled, and all other pooled connections older than the current time are invalidated, so that the next time they are checked out, they will also be recycled before use.

If the database is still not available when “pre ping” runs, then the initial connect will fail and the error for failure to connect will be propagated normally. In the uncommon situation that the database is available for connections, but is not able to respond to a “ping”, the “pre_ping” will try up to three times before giving up, propagating the database error last received.
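
For reference, the same pessimistic “pre ping” behaviour can also be configured on a plain SQLAlchemy engine without Flask. A minimal sketch, with a placeholder connection URL:

    from sqlalchemy import create_engine

    # pool_pre_ping tests each connection with a lightweight "ping" on
    # checkout; pool_recycle=300 replaces connections older than five
    # minutes regardless of liveness.
    engine = create_engine(
        "postgresql://user:password@localhost/mydb",
        pool_pre_ping=True,
        pool_recycle=300,
    )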

2. Optimistic Disconnect Handling

When pessimistic handling is not employed, as well as when the database is shut down and/or restarted in the middle of a connection’s period of use within a transaction, the other approach to dealing with stale / closed connections is to let SQLAlchemy handle disconnects as they occur, at which point all connections in the pool are invalidated, meaning they are assumed to be stale and will be refreshed upon next checkout. This behavior assumes the Pool is used in conjunction with an Engine. The Engine has logic which can detect disconnection events and refresh the pool automatically. When the Connection attempts to use a DBAPI connection, and an exception is raised that corresponds to a “disconnect” event, the connection is invalidated. The Connection then calls the Pool.recreate() method, effectively invalidating all connections not currently checked out so that they are replaced with new ones upon next checkout. This flow is illustrated by the code example below:

from sqlalchemy import create_engine, exc, text

e = create_engine(...)
c = e.connect()

try:
    # suppose the database has been restarted.
    c.execute(text("SELECT * FROM table"))
    c.close()
except exc.DBAPIError as err:
    # an exception is raised, and the connection is invalidated.
    if err.connection_invalidated:
        print("Connection was invalidated!")

# after the invalidate event, a new connection
# starts with a new Pool
c = e.connect()
c.execute(text("SELECT * FROM table"))