This tutorial will cover SQLAlchemy SQL Expressions, which are Python constructs that represent SQL statements. The tutorial is in doctest format, meaning each >>> line represents something you can type at a Python command prompt, and the following text represents the expected return value. The tutorial has no prerequisites.
A quick check to verify that we are on at least version 0.4 of SQLAlchemy:
>>> import sqlalchemy
>>> sqlalchemy.__version__
0.4.0
For this tutorial we will use an in-memory-only SQLite database. This is an easy way to test things without needing to have an actual database defined anywhere. To connect we use create_engine():
>>> from sqlalchemy import create_engine
>>> engine = create_engine('sqlite:///:memory:', echo=True)
The echo flag is a shortcut to setting up SQLAlchemy logging, which is accomplished via Python's standard logging module. With it enabled, we'll see all the generated SQL produced. If you are working through this tutorial and want less output generated, set it to False. This tutorial will format the SQL behind a popup window so it doesn't get in our way; just click the "SQL" links to see what's being generated.
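Since echo is only a shortcut, you can also leave it off and configure logging directly. A minimal sketch of that approach, assuming only the standard library's logging module and the 'sqlalchemy.engine' logger name that the echo flag configures under the hood:

>>> import logging
>>> logging.basicConfig()  # send log records to stderr with a default format
>>> # INFO logs the generated SQL; DEBUG would additionally log result rows
>>> logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)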
The SQL Expression Language constructs its expressions in most cases against table columns. In SQLAlchemy, a column is most often represented by an object called Column, and in all cases a Column is associated with a Table. A collection of Table objects and their associated child objects is referred to as database metadata. In this tutorial we will explicitly lay out several Table objects, but note that SA can also "import" whole sets of Table objects automatically from an existing database (this process is called table reflection).
We define our tables all within a catalog called MetaData, using the Table construct, which resembles regular SQL CREATE TABLE statements. We'll make two tables, one of which represents "users" in an application, and another which represents zero or more "email addresses" for each row in the "users" table:
>>> from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey
>>> metadata = MetaData()
>>> users = Table('users', metadata,
...     Column('id', Integer, primary_key=True),
...     Column('name', String(40)),
...     Column('fullname', String(100)),
... )
>>> addresses = Table('addresses', metadata,
...     Column('id', Integer, primary_key=True),
...     Column('user_id', None, ForeignKey('users.id')),
...     Column('email_address', String(50), nullable=False)
... )
Everything about how to define Table objects, as well as how to create them automatically from an existing database, is described in Database Meta Data.
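As a quick preview of that reflection feature, a minimal sketch might look like the following. It assumes the users table defined above already exists in the database (which happens with the create_all() call below); the second MetaData bound directly to our engine and the autoload=True flag are the only new pieces here:

>>> meta2 = MetaData(engine)                          # a second MetaData, bound to our engine
>>> reflected_users = Table('users', meta2, autoload=True)  # column definitions loaded from the database

The reflected_users object would then carry the same id, name and fullname columns as the users table we declared by hand.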
Next, to tell the MetaData we'd actually like to create our selection of tables for real inside the SQLite database, we use create_all(), passing it the engine instance which points to our database. This will check for the presence of each table first before creating, so it's safe to call multiple times:
sql>>> metadata.create_all(engine)
The first SQL expression we'll create is the Insert construct, which represents an INSERT statement. This is typically created relative to its target table:
>>> ins = users.insert()
To see a sample of the SQL this construct produces, use the str() function:
>>> str(ins)
'INSERT INTO users (id, name, fullname) VALUES (:id, :name, :fullname)'
Notice above that the INSERT statement names every column in the users table. This can be limited by using the values keyword, which establishes the VALUES clause of the INSERT explicitly:
>>> ins = users.insert(values={'name':'jack', 'fullname':'Jack Jones'})
>>> str(ins)
'INSERT INTO users (name, fullname) VALUES (:name, :fullname)'
Above, while the values keyword limited the VALUES clause to just two columns, the actual data we placed in values didn't get rendered into the string; instead we got named bind parameters. As it turns out, our data is stored within our Insert construct, but it typically only comes out when the statement is actually executed; since the data consists of literal values, SQLAlchemy automatically generates bind parameters for them. We can peek at this data for now by looking at the compiled form of the statement:
>>> ins.compile().params
ClauseParameters:{'fullname': 'Jack Jones', 'name': 'jack'}
The interesting part of an Insert is executing it. In this tutorial, we will generally focus on the most explicit method of executing a SQL construct, and later touch upon some "shortcut" ways to do it. The engine object we created is a repository for database connections capable of issuing SQL to the database. To acquire a connection, we use the connect() method:
>>> conn = engine.connect()
>>> conn
<sqlalchemy.engine.base.Connection object at 0x...>
The Connection object represents an actively checked out DBAPI connection resource. Let's feed it our Insert object and see what happens:
>>> result = conn.execute(ins)
So the INSERT statement was now issued to the database, although in the output we got positional "qmark" bind parameters instead of the "named" bind parameters seen earlier. How come? When executed, the Connection used the SQLite dialect to help generate the statement; when we use the str() function, the statement isn't aware of this dialect, and falls back onto a default which uses named parameters. We can view this manually as follows:
>>> from sqlalchemy.databases.sqlite import SQLiteDialect
>>> compiled = ins.compile(dialect=SQLiteDialect())
>>> str(compiled)
'INSERT INTO users (name, fullname) VALUES (?, ?)'
What about the result variable we got when we called execute()? As the SQLAlchemy Connection object references a DBAPI connection, the result, known as a ResultProxy object, is analogous to the DBAPI cursor object. In the case of an INSERT, we can get important information from it, such as the primary key values which were generated from our statement:
>>> result.last_inserted_ids()
[1]
The value of 1 was automatically generated by SQLite, but only because we did not specify the id column in our Insert statement; otherwise, our explicit value would have been used. In either case, SQLAlchemy always knows how to get at a newly generated primary key value, even though the method of generating them is different across different databases; each database's Dialect knows the specific steps needed to determine the correct value (or values; note that last_inserted_ids() returns a list so that it supports composite primary keys).
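For example, a table with a composite primary key is declared simply by marking more than one column with primary_key=True. The order_items table below is purely hypothetical and isn't used elsewhere in this tutorial (note it's attached to its own throwaway MetaData so it doesn't affect our schema); for such a table, last_inserted_ids() would return a two-element list:

>>> items_meta = MetaData()
>>> order_items = Table('order_items', items_meta,
...     Column('order_id', Integer, primary_key=True),   # first half of the composite key
...     Column('item_id', Integer, primary_key=True),    # second half of the composite key
...     Column('quantity', Integer)
... )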
Our insert example above was intentionally a little drawn out to show some various behaviors of expression language constructs. In the usual case, an Insert statement is compiled against the parameters sent to the execute() method on Connection, so that there's no need to use the values keyword with Insert. Let's create a generic Insert statement again and use it in the "normal" way:
>>> ins = users.insert()
>>> conn.execute(ins, id=2, name='wendy', fullname='Wendy Williams')
<sqlalchemy.engine.base.ResultProxy object at 0x...>
Above, because we specified all three columns in the execute() method, the compiled Insert included all three columns. The Insert statement is compiled at execution time based on the parameters we specified; if we specified fewer parameters, the Insert would have fewer entries in its VALUES clause.
To issue many inserts using DBAPI's executemany() method, we can send in a list of dictionaries, each containing a distinct set of parameters to be inserted, as we do here to add some email addresses:
>>> conn.execute(addresses.insert(), [
...     {'user_id': 1, 'email_address' : 'jack@yahoo.com'},
...     {'user_id': 1, 'email_address' : 'jack@msn.com'},
...     {'user_id': 2, 'email_address' : 'www@www.org'},
...     {'user_id': 2, 'email_address' : 'wendy@aol.com'},
... ])
<sqlalchemy.engine.base.ResultProxy object at 0x...>
Above, we again relied upon SQLite's automatic generation of primary key identifiers for each addresses row.
When executing multiple sets of parameters, each dictionary must have the same set of keys; i.e. you can't have fewer keys in some dictionaries than others. This is because the Insert statement is compiled against the first dictionary in the list, and it's assumed that all subsequent argument dictionaries are compatible with that statement.
We're executing our Insert using a Connection. There are two options that allow you to not have to deal with the connection part. You can execute in the connectionless style, using the engine, which opens and closes a connection for you:
sql>>> result = engine.execute(users.insert(), name='fred', fullname="Fred Flintstone")
and you can save even more steps than that by connecting the Engine to the MetaData object we created earlier. When this is done, all SQL expressions which involve tables within the MetaData object will be automatically bound to the Engine. In this case, we call it implicit execution:
>>> metadata.bind = engine
sql>>> result = users.insert().execute(name="mary", fullname="Mary Contrary")
When the MetaData is bound, statements will also compile against the engine's dialect. Since a lot of the examples here assume the default dialect, we'll detach the engine from the metadata which we just attached:
>>> metadata.bind = None
Detailed examples of connectionless and implicit execution are available in the "Engines" chapter: Connectionless Execution, Implicit Execution.
We began with inserts just so that our test database had some data in it. The more interesting part of the data is selecting it! We'll cover UPDATE and DELETE statements later. The primary construct used to generate SELECT statements is the select() function:
>>> from sqlalchemy.sql import select
>>> s = select([users])
>>> result = conn.execute(s)
Above, we issued a basic select() call, placing the users table within the COLUMNS clause of the select, and then executing. SQLAlchemy expanded the users table into the set of each of its columns, and also generated a FROM clause for us. The result returned is again a ResultProxy object, which acts much like a DBAPI cursor, including methods such as fetchone() and fetchall(). The easiest way to get rows from it is to just iterate:
>>> for row in result:
...     print row
(1, u'jack', u'Jack Jones')
(2, u'wendy', u'Wendy Williams')
(3, u'fred', u'Fred Flintstone')
(4, u'mary', u'Mary Contrary')
Above, we see that printing each row produces a simple tuple-like result. We have more options for accessing the data in each row. One very common way is through dictionary access, using the string names of columns:
sql>>> result = conn.execute(s)
>>> row = result.fetchone()
>>> print "name:", row['name'], "; fullname:", row['fullname']
name: jack ; fullname: Jack Jones
Integer indexes work as well:
>>> row = result.fetchone()
>>> print "name:", row[1], "; fullname:", row[2]
name: wendy ; fullname: Wendy Williams
But another way, whose usefulness will become apparent later on, is to use the Column objects directly as keys:
sql>>> for row in conn.execute(s):
...     print "name:", row[users.c.name], "; fullname:", row[users.c.fullname]
name: jack ; fullname: Jack Jones
name: wendy ; fullname: Wendy Williams
name: fred ; fullname: Fred Flintstone
name: mary ; fullname: Mary Contrary
Result sets which have pending rows remaining should be explicitly closed before discarding. While the resources referenced by the ResultProxy will be closed when the object is garbage collected, it's better to make it explicit as some database APIs are very picky about such things:
>>> result.close()
If we'd like to more carefully control the columns which are placed in the COLUMNS clause of the select, we reference individual Column objects from our Table. These are available as named attributes off the c attribute of the Table object:
>>> s = select([users.c.name, users.c.fullname])
sql>>> result = conn.execute(s)
>>> for row in result:
...     print row
(u'jack', u'Jack Jones')
(u'wendy', u'Wendy Williams')
(u'fred', u'Fred Flintstone')
(u'mary', u'Mary Contrary')
Let's observe something interesting about the FROM clause. Whereas the generated statement contains two distinct sections, a "SELECT columns" part and a "FROM table" part, our select() construct only has a list containing columns. How does this work? Let's try putting two tables into our select() statement:
sql>>> for row in conn.execute(select([users, addresses])):
...     print row
(1, u'jack', u'Jack Jones', 1, 1, u'jack@yahoo.com')
(1, u'jack', u'Jack Jones', 2, 1, u'jack@msn.com')
(1, u'jack', u'Jack Jones', 3, 2, u'www@www.org')
(1, u'jack', u'Jack Jones', 4, 2, u'wendy@aol.com')
(2, u'wendy', u'Wendy Williams', 1, 1, u'jack@yahoo.com')
(2, u'wendy', u'Wendy Williams', 2, 1, u'jack@msn.com')
(2, u'wendy', u'Wendy Williams', 3, 2, u'www@www.org')
(2, u'wendy', u'Wendy Williams', 4, 2, u'wendy@aol.com')
(3, u'fred', u'Fred Flintstone', 1, 1, u'jack@yahoo.com')
(3, u'fred', u'Fred Flintstone', 2, 1, u'jack@msn.com')
(3, u'fred', u'Fred Flintstone', 3, 2, u'www@www.org')
(3, u'fred', u'Fred Flintstone', 4, 2, u'wendy@aol.com')
(4, u'mary', u'Mary Contrary', 1, 1, u'jack@yahoo.com')
(4, u'mary', u'Mary Contrary', 2, 1, u'jack@msn.com')
(4, u'mary', u'Mary Contrary', 3, 2, u'www@www.org')
(4, u'mary', u'Mary Contrary', 4, 2, u'wendy@aol.com')
It placed both tables into the FROM clause. But also, it made a real mess. Those who are familiar with SQL joins know that this is a cartesian product; each row from the users table is produced against each row from the addresses table. So to put some sanity into this statement, we need a WHERE clause. Which brings us to the second argument of select():
>>> s = select([users, addresses], users.c.id==addresses.c.user_id)
sql>>> for row in conn.execute(s):
...     print row
(1, u'jack', u'Jack Jones', 1, 1, u'jack@yahoo.com')
(1, u'jack', u'Jack Jones', 2, 1, u'jack@msn.com')
(2, u'wendy', u'Wendy Williams', 3, 2, u'www@www.org')
(2, u'wendy', u'Wendy Williams', 4, 2, u'wendy@aol.com')
That looks a lot better: we added an expression to our select() which had the effect of adding WHERE users.id = addresses.user_id to our statement, and our results were narrowed down so that the join of users and addresses rows made sense. But let's look at that expression. It's using just a Python equality operator between two different Column objects. It should be clear that something is up. Saying 1==1 produces True, and 1==2 produces False, not a WHERE clause. So let's see exactly what that expression is doing:
>>> users.c.id==addresses.c.user_id
<sqlalchemy.sql.expression._BinaryExpression object at 0x...>
Wow, surprise! This is neither a True nor a False. Well, what is it?
>>> str(users.c.id==addresses.c.user_id)
'users.id = addresses.user_id'
As you can see, the == operator is producing an object that is very much like the Insert and select() objects we've made so far, thanks to Python's __eq__() operator-overloading hook; you call str() on it and it produces SQL. By now, one can see that everything we are working with is ultimately the same type of object. SQLAlchemy terms the base class of all of these expressions sqlalchemy.sql.ClauseElement.
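To drive that point home, here's a quick illustrative check (assuming ClauseElement is importable from sqlalchemy.sql.expression in this version); both our binary expression and a fresh Insert report themselves as ClauseElements:

>>> from sqlalchemy.sql.expression import ClauseElement
>>> isinstance(users.c.id==addresses.c.user_id, ClauseElement)
True
>>> isinstance(users.insert(), ClauseElement)
True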
Since we've stumbled upon SQLAlchemy's operator paradigm, let's go through some of its capabilities. We've seen how to equate two columns to each other:
>>> print users.c.id==addresses.c.user_id
users.id = addresses.user_id
If we use a literal value (a literal meaning, not a SQLAlchemy clause object), we get a bind parameter:
>>> print users.c.id==7
users.id = :users_id
The 7 literal is embedded in the resulting ClauseElement; we can use the same trick we did with the Insert object to see it:
>>> (users.c.id==7).compile().params
ClauseParameters:{'users_id': 7}
Most Python operators, as it turns out, produce a SQL expression here, like equals, not equals, etc.:
>>> print users.c.id != 7
users.id != :users_id

>>> # None converts to IS NULL
>>> print users.c.name == None
users.name IS NULL

>>> # reverse works too
>>> print 'fred' > users.c.name
users.name < :users_name
If we add two integer columns together, we get an addition expression:
>>> print users.c.id + addresses.c.id
users.id + addresses.id
Interestingly, the type of the Column is important! If we use + with two string-based columns (recall we put types like Integer and String on our Column objects at the beginning), we get something different:
>>> print users.c.name + users.c.fullname
users.name || users.fullname
Where || is the string concatenation operator used on most databases. But not all of them. MySQL users, fear not:
>>> from sqlalchemy.databases.mysql import MySQLDialect
>>> print (users.c.name + users.c.fullname).compile(dialect=MySQLDialect())
concat(users.name, users.fullname)
The above illustrates the SQL that's generated for an Engine that's connected to a MySQL database (note that the Dialect is normally created behind the scenes; we created one above just to illustrate without using an engine).
If you have come across an operator which really isn't available, you can always use the op() method; this generates whatever operator you need:
>>> print users.c.name.op('tiddlywinks')('foo')
users.name tiddlywinks :users_name
We'd like to show off some of our operators inside of select() constructs. But we need to lump them together a little more, so let's first introduce some conjunctions. Conjunctions are those little words like AND and OR that put things together. We'll also hit upon NOT. AND, OR and NOT can work from the corresponding functions SQLAlchemy provides (notice we also throw in a LIKE):
>>> from sqlalchemy.sql import and_, or_, not_
>>> print and_(users.c.name.like('j%'), users.c.id==addresses.c.user_id,
...     or_(addresses.c.email_address=='wendy@aol.com', addresses.c.email_address=='jack@yahoo.com'),
...     not_(users.c.id>5))
users.name LIKE :users_name AND users.id = addresses.user_id AND (addresses.email_address = :addresses_email_address OR addresses.email_address = :addresses_email_address_1) AND users.id <= :users_id
And you can also use the re-jiggered bitwise AND, OR and NOT operators, although because of Python operator precedence you have to watch your parentheses:
>>> print users.c.name.like('j%') & (users.c.id==addresses.c.user_id) & \
...     ((addresses.c.email_address=='wendy@aol.com') | (addresses.c.email_address=='jack@yahoo.com')) \
...     & ~(users.c.id>5)
users.name LIKE :users_name AND users.id = addresses.user_id AND (addresses.email_address = :addresses_email_address OR addresses.email_address = :addresses_email_address_1) AND users.id <= :users_id
So with all of this vocabulary, let's select all users who have an email address at AOL or MSN, whose name starts with a letter between "m" and "z", and we'll also generate a column containing their full name combined with their email address. We will add two new constructs to this statement, between() and label(). between() produces a BETWEEN clause, and label() is used in a column expression to produce labels using the AS keyword; it's recommended when selecting from expressions that otherwise would not have a name:
>>> s = select([(users.c.fullname + ", " + addresses.c.email_address).label('title')],
...     and_(
...         users.c.id==addresses.c.user_id,
...         users.c.name.between('m', 'z'),
...         or_(
...             addresses.c.email_address.like('%@aol.com'),
...             addresses.c.email_address.like('%@msn.com')
...         )
...     )
... )
>>> print conn.execute(s).fetchall()
SELECT users.fullname || ? || addresses.email_address AS title
FROM users, addresses
WHERE users.id = addresses.user_id AND users.name BETWEEN ? AND ? AND (addresses.email_address LIKE ? OR addresses.email_address LIKE ?)
[', ', 'm', 'z', '%@aol.com', '%@msn.com']
[(u'Wendy Williams, wendy@aol.com',)]
Once again, SQLAlchemy figured out the FROM clause for our statement. In fact it will determine the FROM clause based on all of its other bits: the columns clause, the WHERE clause, and also some other elements which we haven't covered yet, including ORDER BY, GROUP BY, and HAVING.
Our last example really became a handful to type. Going from what one understands to be a textual SQL expression into a Python construct which groups components together in a programmatic style can be hard. That's why SQLAlchemy lets you just use strings too. The text() construct represents any textual statement. To use bind parameters with text(), always use the named colon format. Below, we create a text() and execute it, feeding in the bind parameters to the execute() method:
>>> from sqlalchemy.sql import text
>>> s = text("""SELECT users.fullname || ', ' || addresses.email_address AS title
...     FROM users, addresses
...     WHERE users.id = addresses.user_id AND users.name BETWEEN :x AND :y AND
...     (addresses.email_address LIKE :e1 OR addresses.email_address LIKE :e2)
...     """)
sql>>> print conn.execute(s, x='m', y='z', e1='%@aol.com', e2='%@msn.com').fetchall()
[(u'Wendy Williams, wendy@aol.com',)]
To gain a "hybrid" approach, any of SA's SQL constructs can have text freely intermingled wherever you like - the text()
construct can be placed within any other ClauseElement
construct, and when used in a non-operator context, a direct string may be placed which converts to text()
automatically. Below we combine the usage of text()
and strings with our constructed select()
object, by using the select()
object to structure the statement, and the text()
/strings to provide all the content within the structure. For this example, SQLAlchemy is not given any Column
or Table
objects in any of its expressions, so it cannot generate a FROM clause. So we also give it the from_obj
keyword argument, which is a list of ClauseElements
(or strings) to be placed within the FROM clause:
>>> s = select([text("users.fullname || ', ' || addresses.email_address AS title")], ... and_( ... "users.id = addresses.user_id", ... "users.name BETWEEN 'm' AND 'z'", ... "(addresses.email_address LIKE :x OR addresses.email_address LIKE :y)" ... ), ... from_obj=['users', 'addresses'] ... ) sql>>> print conn.execute(s, x='%@aol.com', y='%@msn.com').fetchall()
[(u'Wendy Williams, wendy@aol.com',)]
Going from constructed SQL to text, we lose some capabilities. We lose the capability for SQLAlchemy to compile our expression to a specific target database; above, our expression won't work with MySQL since it has no || construct. It also becomes more tedious for SQLAlchemy to be made aware of the datatypes in use; for example, if our bind parameters required UTF-8 encoding before going in, or conversion from a Python datetime into a string (as is required with SQLite), we would have to add extra information to our text() construct. Similar issues arise on the result set side, where SQLAlchemy also performs type-specific data conversion in some cases; still more information can be added to text() to work around this. But what we really lose from our statement is the ability to manipulate it, transform it, and analyze it. These features are critical when using the ORM, which makes heavy usage of relational transformations. To show off what we mean, we'll first introduce the ALIAS construct and the JOIN construct, just so we have some juicier bits to play with.
The alias corresponds to a "renamed" version of a table or arbitrary relation, which occurs anytime you say "SELECT .. FROM sometable AS someothername". The AS creates a new name for the table. Aliases are super important in SQL as they allow you to reference the same table more than once. Scenarios where you need to do this include when you self-join a table to itself, or more commonly when you need to join from a parent table to a child table multiple times. For example, we know that our user jack has two email addresses. How can we locate jack based on the combination of those two addresses? We need to join to the addresses table twice. Let's construct two distinct aliases for the addresses table and join:
>>> a1 = addresses.alias('a1')
>>> a2 = addresses.alias('a2')
>>> s = select([users], and_(
...     users.c.id==a1.c.user_id,
...     users.c.id==a2.c.user_id,
...     a1.c.email_address=='jack@msn.com',
...     a2.c.email_address=='jack@yahoo.com'
... ))
sql>>> print conn.execute(s).fetchall()
[(1, u'jack', u'Jack Jones')]
Easy enough. One thing that we're going for with the SQL Expression Language is the melding of programmatic behavior with SQL generation. Coming up with names like a1 and a2 is messy; we really didn't need to use those names anywhere, it's just the database that needed them. Plus, we might write some code that uses alias objects that came from several different places, and it's difficult to ensure that they all have unique names. So instead, we just let SQLAlchemy make the names for us, using "anonymous" aliases:
>>> a1 = addresses.alias()
>>> a2 = addresses.alias()
>>> s = select([users], and_(
...     users.c.id==a1.c.user_id,
...     users.c.id==a2.c.user_id,
...     a1.c.email_address=='jack@msn.com',
...     a2.c.email_address=='jack@yahoo.com'
... ))
sql>>> print conn.execute(s).fetchall()
[(1, u'jack', u'Jack Jones')]
One super-huge advantage of anonymous aliases is that not only did we not have to make up a random name, but we can also be guaranteed that the above SQL string is deterministically generated to be the same every time. This is important for databases such as Oracle which cache compiled "query plans" for their statements, and need to see the same SQL string in order to make use of it.
Aliases can of course be used for anything which you can SELECT from, including SELECT statements themselves. We can self-join the users table back to the select() we've created by making an alias of the entire statement. The correlate(None) directive is to avoid SQLAlchemy's attempt to "correlate" the inner users table with the outer one:
>>> a1 = s.correlate(None).alias()
>>> s = select([users.c.name], users.c.id==a1.c.id)
sql>>> print conn.execute(s).fetchall()
[(u'jack',)]
We're halfway along to being able to construct any SELECT expression. The next cornerstone of the SELECT is the JOIN expression. We've already been doing joins in our examples, by just placing two tables in either the columns clause or the where clause of the select() construct. But if we want to make a real "JOIN" or "OUTERJOIN" construct, we use the join() and outerjoin() methods, most commonly accessed from the left table in the join:
>>> print users.join(addresses)
users JOIN addresses ON users.id = addresses.user_id
The alert reader will see more surprises; SQLAlchemy figured out how to JOIN the two tables! The ON condition of the join, as it's called, was automatically generated based on the ForeignKey object which we placed on the addresses table way at the beginning of this tutorial. Already the join() construct is looking like a much better way to join tables.
Of course you can join on whatever expression you want, such as if we want to join on all users who use the same name in their email address as their username:
>>> print users.join(addresses, addresses.c.email_address.like(users.c.name + '%'))
users JOIN addresses ON addresses.email_address LIKE users.name || :users_name
When we create a select() construct, SQLAlchemy looks around at the tables we've mentioned and then places them in the FROM clause of the statement. When we use JOINs however, we know what FROM clause we want, so here we make use of the from_obj keyword argument:
>>> s = select([users.c.fullname], from_obj=[
...     users.join(addresses, addresses.c.email_address.like(users.c.name + '%'))
... ])
sql>>> print conn.execute(s).fetchall()
[(u'Jack Jones',), (u'Jack Jones',), (u'Wendy Williams',)]
The outerjoin() function just creates LEFT OUTER JOIN constructs. It's used just like join():
>>> s = select([users.c.fullname], from_obj=[users.outerjoin(addresses)])
>>> print s
SELECT users.fullname
FROM users LEFT OUTER JOIN addresses ON users.id = addresses.user_id
That's the output outerjoin() produces, unless, of course, you're stuck in a gig using Oracle prior to version 9, and you've set up your engine (which would be using OracleDialect) to use Oracle-specific SQL:
>>> from sqlalchemy.databases.oracle import OracleDialect
>>> print s.compile(dialect=OracleDialect(use_ansi=False))
SELECT users.fullname
FROM users, addresses
WHERE users.id = addresses.user_id(+)
If you don't know what that SQL means, don't worry! The secret tribe of Oracle DBAs don't want their black magic being found out ;).
We've now gained the ability to construct very sophisticated statements. We can use all kinds of operators, table constructs, text, joins, and aliases. The point of all of this, as mentioned earlier, is not that it's an "easier" or "better" way to write SQL than just writing a SQL statement yourself; the point is that it's better for writing programmatically generated SQL which can be morphed and adapted as needed in automated scenarios.
To support this, the select() construct we've been working with supports piecemeal construction, in addition to the "all at once" method we've been doing. Suppose you're writing a search function, which receives criteria and then must construct a select from them. To accomplish this, upon each criterion encountered, you use "generative" methods to extend an existing select() construct with new elements, one at a time. We start with a basic select() constructed with the shortcut method available on the users table:
>>> query = users.select()
>>> print query
SELECT users.id, users.name, users.fullname
FROM users
We encounter a search criterion of "name='jack'", so we add a WHERE clause stating as much:
>>> query = query.where(users.c.name=='jack')
Next, we encounter that they'd like the results in descending order by full name. We apply ORDER BY, using an extra modifier desc:
>>> query = query.order_by(users.c.fullname.desc())
We also learn that they'd like only users who have an address at MSN. A quick way to tack this on is by using an EXISTS clause, which we correlate to the users table in the enclosing SELECT:
>>> from sqlalchemy.sql import exists
>>> query = query.where(
...     exists([addresses.c.id],
...         and_(addresses.c.user_id==users.c.id, addresses.c.email_address.like('%@msn.com'))
...     ).correlate(users))
And finally, the application also wants to see the listing of email addresses at once; so to save queries, we outerjoin the addresses table (using an outer join so that users with no addresses come back as well; since we're programmatic, we might not have kept track that we used an EXISTS clause against the addresses table too...). Additionally, since the users and addresses tables both have a column named id, let's isolate their names from each other in the COLUMNS clause by using labels:
>>> query = query.column(addresses).select_from(users.outerjoin(addresses)).apply_labels()
Let's bake for .0001 seconds and see what rises:
>>> conn.execute(query).fetchall()
[(1, u'jack', u'Jack Jones', 1, 1, u'jack@yahoo.com'), (1, u'jack', u'Jack Jones', 2, 1, u'jack@msn.com')]
So we started small, added one little thing at a time, and at the end we have a huge statement... which actually works. Now let's do one more thing: the searching function wants to add another email_address criterion, but it doesn't want to construct an alias of the addresses table; suppose many parts of the application are written to deal specifically with the addresses table, and changing all those functions to support receiving an arbitrary alias of addresses would be cumbersome. We can actually convert the addresses table within the existing statement to be an alias of itself, using replace_selectable():
>>> a1 = addresses.alias()
>>> query = query.replace_selectable(addresses, a1)
>>> print query
One more thing though: with automatic labeling applied as well as anonymous aliasing, how do we retrieve the columns from the rows for this thing? The label for the email_address column is now the generated name addresses_1_email_address, and in another statement it might be something different! This is where accessing result columns by Column object becomes very useful:
sql>>> for row in conn.execute(query):
...     print "Name:", row[users.c.name], "; Email Address", row[a1.c.email_address]
Name: jack ; Email Address jack@yahoo.com
Name: jack ; Email Address jack@msn.com
The above example, by its end, got significantly more intense than the typical end-user constructed SQL will usually be. However, when writing higher-level tools such as ORMs, these techniques become much more significant. SQLAlchemy's ORM relies very heavily on techniques like this.
The concepts of creating SQL expressions have been introduced. What's left are more variants of the same themes. So now we'll catalog the rest of the important things we'll need to know.
Throughout all these examples, SQLAlchemy is busy creating bind parameters wherever literal expressions occur. You can also specify your own bind parameters with your own names, and use the same statement repeatedly. The database dialect converts to the appropriate named or positional style, as here where it converts to positional for SQLite:
>>> from sqlalchemy.sql import bindparam
>>> s = users.select(users.c.name==bindparam('username'))
sql>>> conn.execute(s, username='wendy').fetchall()
[(2, u'wendy', u'Wendy Williams')]
Another important aspect of bind parameters is that they may be assigned a type. The type of the bind parameter will determine its behavior within expressions and also how the data bound to it is processed before being sent off to the database:
>>> s = users.select(users.c.name.like(bindparam('username', type_=String) + text("'%'")))
sql>>> conn.execute(s, username='wendy').fetchall()
[(2, u'wendy', u'Wendy Williams')]
Bind parameters of the same name can also be used multiple times, where only a single named value is needed in the execute parameters:
>>> s = select([users, addresses],
...     users.c.name.like(bindparam('name', type_=String) + text("'%'")) |
...     addresses.c.email_address.like(bindparam('name', type_=String) + text("'@%'")),
...     from_obj=[users.outerjoin(addresses)])
sql>>> conn.execute(s, name='jack').fetchall()
[(1, u'jack', u'Jack Jones', 1, 1, u'jack@yahoo.com'), (1, u'jack', u'Jack Jones', 2, 1, u'jack@msn.com')]
SQL functions are created using the func keyword, which generates functions using attribute access:
>>> from sqlalchemy.sql import func
>>> print func.now()
now()

>>> print func.concat('x', 'y')
concat(:concat, :concat_1)
Certain functions are marked as "ANSI" functions, which means they don't get the parentheses added after them, such as CURRENT_TIMESTAMP:
>>> print func.current_timestamp()
current_timestamp
Functions are most typically used in the columns clause of a select statement, and can also be labeled as well as given a type. Labeling a function is recommended so that the result can be targeted in a result row based on a string name, and assigning it a type is required when you need result-set processing to occur, such as for unicode conversion and date conversions. Below, we use the result function scalar() to just read the first column of the first row and then close the result; the label, even though present, is not important in this case:
>>> print conn.execute(
...     select([func.max(addresses.c.email_address, type_=String).label('maxemail')])
... ).scalar()
www@www.org
On databases such as Postgres and Oracle, functions which return whole result sets can be assembled into selectable units, which can then be used in statements. For example, given a database function calculate() which takes the parameters x and y and returns three columns which we'd like to name q, z and r, we can construct it using "lexical" column objects as well as bind parameters:
>>> from sqlalchemy.sql import column
>>> calculate = select([column('q'), column('z'), column('r')],
...     from_obj=[func.calculate(bindparam('x'), bindparam('y'))])
>>> print select([users], users.c.id > calculate.c.z)
SELECT users.id, users.name, users.fullname
FROM users, (SELECT q, z, r FROM calculate(:x, :y))
WHERE users.id > z
If we wanted to use our calculate statement twice with different bind parameters, the unique_params() function will create copies for us, and mark the bind parameters as "unique" so that conflicting names are isolated. Note we also make two separate aliases of our selectable:
>>> s = select([users], users.c.id.between(
...     calculate.alias('c1').unique_params(x=17, y=45).c.z,
...     calculate.alias('c2').unique_params(x=5, y=12).c.z))
>>> print s
SELECT users.id, users.name, users.fullname
FROM users, (SELECT q, z, r FROM calculate(:x, :y)) AS c1, (SELECT q, z, r FROM calculate(:x_1, :y_1)) AS c2
WHERE users.id BETWEEN c1.z AND c2.z

>>> s.compile().params
ClauseParameters:{'y': 45, 'x': 17, 'y_1': 12, 'x_1': 5}
Unions come in two flavors, UNION and UNION ALL, which are available via module level functions or methods off a Selectable:
>>> u = addresses.select(addresses.c.email_address=='foo@bar.com').union(
...     addresses.select(addresses.c.email_address.like('%@yahoo.com')),
... ).order_by(addresses.c.email_address)
sql>>> print conn.execute(u).fetchall()
[(1, 1, u'jack@yahoo.com')]
Also available, though not supported on all databases, are intersect(), intersect_all(), except_(), and except_all():
>>> u = addresses.select(addresses.c.email_address.like('%@%.com')).except_(
...     addresses.select(addresses.c.email_address.like('%@msn.com'))
... )
sql>>> print conn.execute(u).fetchall()
[(1, 1, u'jack@yahoo.com'), (4, 2, u'wendy@aol.com')]
To embed a SELECT in a column expression, use as_scalar():
sql>>> print conn.execute(select([
...     users.c.name,
...     select([func.count(addresses.c.id)], users.c.id==addresses.c.user_id).as_scalar()
... ])).fetchall()
[(u'jack', 2), (u'wendy', 2), (u'fred', 0), (u'mary', 0)]
Alternatively, applying a label() to a select evaluates it as a scalar as well:
sql>>> print conn.execute(select([
...     users.c.name,
...     select([func.count(addresses.c.id)], users.c.id==addresses.c.user_id).label('address_count')
... ])).fetchall()
[(u'jack', 2), (u'wendy', 2), (u'fred', 0), (u'mary', 0)]
Notice that in the "scalar select" examples, the FROM clause of each embedded select did not contain the users table. This is because SQLAlchemy automatically attempts to correlate embedded FROM objects to those of an enclosing query. To disable this, or to specify explicit FROM clauses to be correlated, use correlate():
>>> s = select([users.c.name], users.c.id==select([users.c.id]).correlate(None))
>>> print s
SELECT users.name
FROM users
WHERE users.id = (SELECT users.id FROM users)

>>> s = select([users.c.name, addresses.c.email_address], users.c.id==
...     select([users.c.id], users.c.id==addresses.c.user_id).correlate(addresses)
... )
>>> print s
SELECT users.name, addresses.email_address
FROM users, addresses
WHERE users.id = (SELECT users.id FROM users WHERE users.id = addresses.user_id)
The select() function can take keyword arguments order_by, group_by (as well as having), limit, and offset. There's also distinct=True. These are all also available as generative functions. order_by() expressions can use the modifiers asc() or desc() to indicate ascending or descending.
>>> s = select([addresses.c.user_id, func.count(addresses.c.id)]).\
...     group_by(addresses.c.user_id).having(func.count(addresses.c.id) > 1)
>>> print conn.execute(s).fetchall()
[(1, 2), (2, 2)]

>>> s = select([addresses.c.email_address, addresses.c.id]).distinct().\
...     order_by(addresses.c.email_address.desc(), addresses.c.id)
>>> conn.execute(s).fetchall()
[(u'www@www.org', 3), (u'wendy@aol.com', 4), (u'jack@yahoo.com', 1), (u'jack@msn.com', 2)]

>>> s = select([addresses]).offset(1).limit(1)
>>> print conn.execute(s).fetchall()
[(2, 1, u'jack@msn.com')]
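For completeness, the keyword-argument form mentioned above would look roughly like the following; this is just a sketch using the order_by, limit, offset and distinct keywords, and we don't execute it here:

>>> # same ideas expressed as keyword arguments to select()
>>> s = select([users.c.name],
...     order_by=[users.c.name.desc()],
...     limit=2, offset=1, distinct=True)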
Finally, we're back to UPDATE. Updates work a lot like INSERTs, except there is an additional WHERE clause that can be specified:
>>> # change 'jack' to 'ed'
sql>>> conn.execute(users.update(users.c.name=='jack'), name='ed')
<sqlalchemy.engine.base.ResultProxy object at 0x...>

>>> # use bind parameters
>>> u = users.update(users.c.name==bindparam('oldname'), values={'name':bindparam('newname')})
sql>>> conn.execute(u, oldname='jack', newname='ed')
<sqlalchemy.engine.base.ResultProxy object at 0x...>

>>> # update a column to an expression
sql>>> conn.execute(users.update(values={users.c.fullname:"Fullname: " + users.c.name}))
<sqlalchemy.engine.base.ResultProxy object at 0x...>
A correlated update lets you update a table using selection from another table, or the same table:
>>> s = select([addresses.c.email_address], addresses.c.user_id==users.c.id).limit(1)
sql>>> conn.execute(users.update(values={users.c.fullname:s}))
<sqlalchemy.engine.base.ResultProxy object at 0x...>
Finally, a delete. Easy enough:
sql>>> conn.execute(addresses.delete())
<sqlalchemy.engine.base.ResultProxy object at 0x...>

sql>>> conn.execute(users.delete(users.c.name > 'm'))
<sqlalchemy.engine.base.ResultProxy object at 0x...>
The best place to get every possible name you can use in constructed SQL is the Generated Documentation.
Table Metadata Reference: Database Meta Data
Engine/Connection/Execution Reference: Database Engines
SQL Types: The Types System