In this tutorial we will cover a basic SQLAlchemy object-relational mapping scenario, where we store and retrieve Python objects from a database representation. The database schema will begin with one table, and will later develop into several. The tutorial is in doctest format, meaning each >>> line represents something you can type at a Python command prompt, and the following text represents the expected return value. The tutorial has no prerequisites.
A quick check to verify that we are on at least version 0.4 of SQLAlchemy:

>>> import sqlalchemy
>>> sqlalchemy.__version__
'0.4.0'
For this tutorial we will use an in-memory-only SQLite database. This is an easy way to test things without needing to have an actual database defined anywhere. To connect we use create_engine():

>>> from sqlalchemy import create_engine
>>> engine = create_engine('sqlite:///:memory:', echo=True)
The echo flag is a shortcut to setting up SQLAlchemy logging, which is accomplished via Python's standard logging module. With it enabled, we'll see all the generated SQL produced. If you are working through this tutorial and want less output generated, set it to False. In the examples below, statements that cause SQL to be emitted are marked with a sql prefix on the >>> prompt.
Next we want to tell SQLAlchemy about our tables. We will start with just a single table called users, which will store records for the end-users of our application (let's assume it's a website). We define our tables all within a catalog called MetaData, using the Table construct, which resembles regular SQL CREATE TABLE syntax:

>>> from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey
>>> metadata = MetaData()
>>> users_table = Table('users', metadata,
...     Column('id', Integer, primary_key=True),
...     Column('name', String(40)),
...     Column('fullname', String(100)),
...     Column('password', String(15))
... )
Everything about how to define Table objects, as well as how to create them automatically from an existing database, is described in Database Meta Data.
Next, to tell the MetaData we'd actually like to create our users_table for real inside the SQLite database, we use create_all(), passing it the engine instance which points to our database. This will check for the presence of a table before creating it, so it's safe to call multiple times:

sql>>> metadata.create_all(engine)
So now our database is created, our initial schema is present, and our SQLAlchemy application knows all about the tables and columns in the database; this information will be re-used by the Object Relational Mapper, as we'll see now.
So let's create a rudimentary User object to be mapped in the database. This object will for starters have three attributes: name, fullname and password. It need only subclass Python's built-in object class (i.e. it's a new-style class). We will give it a constructor so that it may conveniently be instantiated with its attributes at once, as well as a __repr__ method so that we can get a nice string representation of it:

>>> class User(object):
...     def __init__(self, name, fullname, password):
...         self.name = name
...         self.fullname = fullname
...         self.password = password
...
...     def __repr__(self):
...         return "<User(%r,%r, %r)>" % (self.name, self.fullname, self.password)
With our users_table and User class, we now want to map the two together. That's where the SQLAlchemy ORM package comes in. We'll use the mapper() function to create a mapping between users_table and User:

>>> from sqlalchemy.orm import mapper
>>> mapper(User, users_table)
<sqlalchemy.orm.mapper.Mapper object at 0x...>
The mapper() function creates a new Mapper object and stores it away for future reference. It also instruments the attributes on our User class, corresponding to the users_table table. The id, name, fullname, and password columns in our users_table are now instrumented upon our User class, meaning it will keep track of all changes to these attributes, and can save and load their values to/from the database. Let's create our first user, 'Ed Jones', and ensure that the object has all three of these attributes:

>>> ed_user = User('ed', 'Ed Jones', 'edspassword')
>>> ed_user.name
'ed'
>>> ed_user.password
'edspassword'
>>> str(ed_user.id)
'None'
What was that last id attribute? That was placed there by the Mapper, to track the value of the id column in the users_table. Since our User doesn't exist in the database, its id is None. When we save the object, it will be populated automatically with its new id.
We're now ready to start talking to the database. The ORM's "handle" to the database is the Session. When we first set up the application, at the same level as our create_engine() statement, we define a second object called Session (or whatever you want to call it, create_session, etc.) which is configured by the sessionmaker() function. This function is configurational and need only be called once.

>>> from sqlalchemy.orm import sessionmaker
>>> Session = sessionmaker(bind=engine, autoflush=True, transactional=True)
In the case where your application does not yet have an Engine when you define your module-level objects, just set it up like this:

>>> Session = sessionmaker(autoflush=True, transactional=True)

Later, when you create your engine with create_engine(), connect it to the Session using configure():

>>> Session.configure(bind=engine) # once engine is available
This Session class will create new Session objects which are bound to our database and have the transactional characteristics we've configured. Whenever you need to have a conversation with the database, you instantiate a Session:

>>> session = Session()
The above Session is associated with our SQLite engine, but it hasn't opened any connections yet. When it's first used, it retrieves a connection from a pool of connections maintained by the engine, and holds onto it until we commit all changes and/or close the session object. Because we configured transactional=True, there's also a transaction in progress (one notable exception to this is MySQL, when you use its default table style of MyISAM). There are options available to modify this behavior, but we'll go with this straightforward version to start.
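The idea of deferring connection acquisition until first use can be sketched in plain Python with the standard library's sqlite3 module. Note this is just an illustration of the pattern; LazySession and its attributes are made-up names, not SQLAlchemy API:

```python
import sqlite3

class LazySession:
    """Sketch: acquire the database connection only on first use."""
    def __init__(self, dsn):
        self.dsn = dsn
        self._conn = None              # nothing opened at construction time

    @property
    def connection(self):
        if self._conn is None:         # first use: actually connect
            self._conn = sqlite3.connect(self.dsn)
        return self._conn

s = LazySession(':memory:')
print(s._conn is None)                 # True: instantiating opened nothing
s.connection.execute("SELECT 1")
print(s._conn is not None)             # True: first use acquired the connection
```

A real Session adds pooling and transaction management on top of this, but the "no connection until needed" behavior is the same.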
So saving our User is as easy as issuing save():

>>> session.save(ed_user)
But you'll notice nothing has happened yet. Well, let's pretend something did, and try to query for our user. This is done using the query() method on Session. We create a new query representing the set of all User objects first. Then we narrow the results by "filtering" down to the user we want; that is, the user whose name attribute is "ed". Finally we call first(), which tells Query, "we'd like the first result in this list".

sql>>> session.query(User).filter_by(name='ed').first()
<User('ed','Ed Jones', 'edspassword')>
And we get back our new user. If you view the generated SQL, you'll see that the Session issued an INSERT statement before querying. The Session stores whatever you put into it in memory, and at certain points it issues a flush, which issues SQL to the database to store all pending new objects and changes to existing objects. You can manually invoke the flush operation using flush(); however, when the Session is configured to autoflush, it's usually not needed.
OK, let's do some more operations. We'll create and save three more users:

>>> session.save(User('wendy', 'Wendy Williams', 'foobar'))
>>> session.save(User('mary', 'Mary Contrary', 'xxg527'))
>>> session.save(User('fred', 'Fred Flinstone', 'blah'))
Also, Ed has already decided his password isn't secure enough, so let's change it:
>>> ed_user.password = 'f8s7ccs'
Then we'll permanently store everything that's been changed and added to the database. We do this via commit():

sql>>> session.commit()
commit() flushes whatever changes remain to the database, and commits the transaction. The connection resources referenced by the session are now returned to the connection pool. Subsequent operations with this session will occur in a new transaction, which will again re-acquire connection resources when first needed.
If we look at Ed's id attribute, which earlier was None, it now has a value:

>>> ed_user.id
1
After each INSERT operation, the Session assigns all newly generated ids and column defaults to the mapped object instance. For column defaults which are database-generated and are not part of the table's primary key, they'll be loaded when you first reference the attribute on the instance.
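The underlying mechanics can be observed at the DB-API level with the standard library's sqlite3 module: the database generates the primary key during the INSERT, and a database-side column default can be read back afterward. The table and column names here are illustrative, not part of the tutorial's schema:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE users ("
             "id INTEGER PRIMARY KEY, "       # database-generated key
             "name TEXT, "
             "status TEXT DEFAULT 'active')")  # database-side column default
cur = conn.execute("INSERT INTO users (name) VALUES ('ed')")
new_id = cur.lastrowid        # the id the database just generated
status = conn.execute("SELECT status FROM users WHERE id = ?",
                      (new_id,)).fetchone()[0]
print(new_id, status)          # 1 active
```

The ORM automates both steps: it reads the generated id back into the instance after the flush, and fetches non-primary-key defaults lazily.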
One crucial thing to note about the Session is that each object instance is cached within it, based on its primary key identifier. The reason for this cache is not so much performance as maintaining an identity map of instances. This map guarantees that whenever you work with a particular User object in a session, you always get the same instance back. As below, reloading Ed gives us the same instance back:

sql>>> ed_user is session.query(User).filter_by(name='ed').one()
True
The get() method, which queries based on primary key, will not issue any SQL to the database if the given key is already present:

>>> ed_user is session.query(User).get(ed_user.id)
True
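The identity-map idea itself is independent of SQLAlchemy and can be sketched in a few lines of plain Python. IdentityMap and its method names here are hypothetical, for illustration only:

```python
class IdentityMap:
    """Toy identity map: at most one instance per (class, primary key)."""
    def __init__(self):
        self._instances = {}

    def get(self, cls, pk, loader):
        key = (cls, pk)
        if key not in self._instances:   # load from the "database" only once
            self._instances[key] = loader()
        return self._instances[key]      # every later lookup: same object

class User(object):
    pass

imap = IdentityMap()
first = imap.get(User, 1, User)
second = imap.get(User, 1, User)
print(first is second)   # True: object identity is guaranteed per key
```

This is why get() can skip SQL entirely when the key is already in the map, and why two queries for "ed" hand back the very same object.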
A whirlwind tour through querying.
A Query is created from the Session, relative to a particular class we wish to load:

>>> query = session.query(User)
Once we have a query, we can start loading objects. The Query object, when first created, represents all the instances of its main class. You can iterate through it directly:

sql>>> for user in session.query(User):
...     print user.name
ed
wendy
mary
fred
...and the SQL will be issued at the point where the query is evaluated as a list. If you apply array slices before iterating, LIMIT and OFFSET are applied to the query:

sql>>> for u in session.query(User)[1:3]:
...     print u
<User('wendy','Wendy Williams', 'foobar')>
<User('mary','Mary Contrary', 'xxg527')>
Narrowing the results down is accomplished either with filter_by(), which uses keyword arguments:

sql>>> for user in session.query(User).filter_by(name='ed', fullname='Ed Jones'):
...     print user
<User('ed','Ed Jones', 'f8s7ccs')>
...or filter(), which uses SQL expression language constructs. These allow you to use regular Python operators with the class-level attributes on your mapped class:

sql>>> for user in session.query(User).filter(User.name=='ed'):
...     print user
<User('ed','Ed Jones', 'f8s7ccs')>
You can also use the Column constructs attached to the users_table object to construct SQL expressions:

sql>>> for user in session.query(User).filter(users_table.c.name=='ed'):
...     print user
<User('ed','Ed Jones', 'f8s7ccs')>
Most common SQL operators are available, such as LIKE:

sql>>> session.query(User).filter(User.name.like('%ed'))[1]
<User('fred','Fred Flinstone', 'blah')>
Note that above, our array index of 1 applied the appropriate LIMIT/OFFSET and returned a scalar result immediately.
The all(), one(), and first() methods immediately issue SQL without using an iterative context or array index. all() returns a list:

>>> query = session.query(User).filter(User.name.like('%ed'))
sql>>> query.all()
[<User('ed','Ed Jones', 'f8s7ccs')>, <User('fred','Fred Flinstone', 'blah')>]
first() applies a limit of one and returns the first result as a scalar:

sql>>> query.first()
<User('ed','Ed Jones', 'f8s7ccs')>
and one() applies a limit of two, and raises an error if anything other than exactly one row is returned:

sql>>> try:
...     user = query.one()
... except Exception, e:
...     print e
Multiple rows returned for one()
All Query methods that don't return a result instead return a new Query object, with modifications applied. Therefore you can call many query methods successively to build up the criteria you want:

sql>>> session.query(User).filter(User.id<2).filter_by(name='ed').\
...     filter(User.fullname=='Ed Jones').all()
[<User('ed','Ed Jones', 'f8s7ccs')>]
If you need to use other conjunctions besides AND, all SQL conjunctions are available explicitly within expressions, such as and_() and or_(), when using filter():

>>> from sqlalchemy import and_, or_
sql>>> session.query(User).filter(
...     and_(User.id<224, or_(User.name=='ed', User.name=='wendy'))
... ).all()
[<User('ed','Ed Jones', 'f8s7ccs')>, <User('wendy','Wendy Williams', 'foobar')>]
You also have full ability to use literal strings to construct SQL. For a single criterion, use a string with filter():

sql>>> for user in session.query(User).filter("id<224").all():
...     print user.name
ed
wendy
mary
fred
Bind parameters can be specified with string-based SQL, using a colon. To specify the values, use the params() method:

sql>>> session.query(User).filter("id<:value and name=:name").\
...     params(value=224, name='fred').one()
<User('fred','Fred Flinstone', 'blah')>
Note that when we use constructed SQL expressions, bind parameters are generated for us automatically; we don't need to worry about them.
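The colon-style placeholders are the standard DB-API "named" paramstyle, which the standard library's sqlite3 module accepts directly. Here is the same kind of query written against sqlite3; the table contents are illustrative:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [('ed',), ('wendy',), ('fred',)])
# ':value' / ':name' placeholders filled from a dict, as params() does
rows = conn.execute(
    "SELECT name FROM users WHERE id < :value AND name = :name",
    {'value': 224, 'name': 'fred'}).fetchall()
print(rows)   # [('fred',)]
```

Using bind parameters instead of string interpolation keeps values out of the SQL text, which is both safer and friendlier to statement caching.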
To use an entirely string-based statement, use from_statement(); just ensure that the columns clause of the statement contains the column names normally used by the mapper (below illustrated using an asterisk):

sql>>> session.query(User).from_statement("SELECT * FROM users where name=:name").params(name='ed').all()
[<User('ed','Ed Jones', 'f8s7ccs')>]
from_statement() can also accommodate full select() constructs. These are described in the SQL Expression Language Tutorial:

>>> from sqlalchemy import select, func
sql>>> session.query(User).from_statement(
...     select(
...         [users_table],
...         select([func.max(users_table.c.name)]).label('maxuser')==users_table.c.name)
...     ).all()
[<User('wendy','Wendy Williams', 'foobar')>]
There's also a way to combine scalar results with objects, using add_column(). This is often used for functions and aggregates. When add_column() (or its cousin add_entity(), described later) is used, tuples are returned:

sql>>> for r in session.query(User).\
...     add_column(select([func.max(users_table.c.name)]).label('maxuser')):
...     print r
(<User('ed','Ed Jones', 'f8s7ccs')>, u'wendy')
(<User('wendy','Wendy Williams', 'foobar')>, u'wendy')
(<User('mary','Mary Contrary', 'xxg527')>, u'wendy')
(<User('fred','Fred Flinstone', 'blah')>, u'wendy')
We've spent a lot of time dealing with just one class and one table. Let's now look at how SQLAlchemy deals with two tables which have a relationship to each other. Let's say that the users in our system can also store any number of email addresses associated with their username. This implies a basic one-to-many association from the users_table to a new table which stores email addresses, which we will call addresses. We will also create a relationship between this new table and the users table, using a ForeignKey:

>>> from sqlalchemy import ForeignKey
>>> addresses_table = Table('addresses', metadata,
...     Column('id', Integer, primary_key=True),
...     Column('email_address', String(100), nullable=False),
...     Column('user_id', Integer, ForeignKey('users.id')))
Another call to create_all() will skip over our users table and build just the new addresses table:

sql>>> metadata.create_all(engine)
For our ORM setup, we're going to start all over again. We will first close out our Session and clear all Mapper objects:

>>> from sqlalchemy.orm import clear_mappers
>>> session.close()
>>> clear_mappers()
Our User class, still around, reverts to being just a plain old class. Let's create an Address class to represent a user's email address:

>>> class Address(object):
...     def __init__(self, email_address):
...         self.email_address = email_address
...
...     def __repr__(self):
...         return "<Address(%r)>" % self.email_address
Now comes the fun part. We define a mapper for each class, and associate them using a function called relation(). We can define each mapper in any order we want:

>>> from sqlalchemy.orm import relation
>>> mapper(User, users_table, properties={
...     'addresses':relation(Address, backref='user')
... })
<sqlalchemy.orm.mapper.Mapper object at 0x...>
>>> mapper(Address, addresses_table)
<sqlalchemy.orm.mapper.Mapper object at 0x...>
Above, the new thing we see is that User has defined a relation named addresses, which will reference a list of Address objects. How does it know it's a list? SQLAlchemy figures it out for you, based on the foreign key relationship between users_table and addresses_table.
Now when we create a User, it automatically has this collection present:

>>> jack = User('jack', 'Jack Bean', 'gjffdd')
>>> jack.addresses
[]
We are free to add Address objects, and the session will take care of everything for us:

>>> jack.addresses.append(Address(email_address='jack@google.com'))
>>> jack.addresses.append(Address(email_address='j25@yahoo.com'))
Before we save into the Session, let's examine one other thing that's happened here. The addresses collection is present on our User because we added a relation() with that name. But also within the relation() function is the keyword backref. This keyword indicates that we wish to make a bi-directional relationship: not only did we generate a one-to-many relationship called addresses on the User class, we also generated a many-to-one relationship on the Address class. This relationship is self-updating, without any data being flushed to the database, as we can see on one of Jack's addresses:

>>> jack.addresses[1]
<Address('j25@yahoo.com')>
>>> jack.addresses[1].user
<User('jack','Jack Bean', 'gjffdd')>
Let's save into the session, then close out the session and create a new one... so that we can see how Jack and his email addresses come back to us:

>>> session.save(jack)
sql>>> session.commit()
>>> session = Session()
Querying for Jack, we get just Jack back. No SQL is yet issued for Jack's addresses:

sql>>> jack = session.query(User).filter_by(name='jack').one()
>>> jack
<User(u'jack',u'Jack Bean', u'gjffdd')>
Let's look at the addresses collection. Watch the SQL:

sql>>> jack.addresses
[<Address(u'jack@google.com')>, <Address(u'j25@yahoo.com')>]
When we accessed the addresses collection, SQL was suddenly issued. This is an example of a lazy loading relation.
If you want to reduce the number of queries (dramatically, in many cases), we can apply an eager load to the query operation. We clear out the session to ensure that a full reload occurs:
>>> session.clear()
Then apply an option to the query, indicating that we'd like addresses to load "eagerly". SQLAlchemy then constructs a join between the users and addresses tables:

>>> from sqlalchemy.orm import eagerload
sql>>> jack = session.query(User).options(eagerload('addresses')).filter_by(name='jack').one()
>>> jack
<User(u'jack',u'Jack Bean', u'gjffdd')>
>>> jack.addresses
[<Address(u'jack@google.com')>, <Address(u'j25@yahoo.com')>]
If you think that query is elaborate, it is! But SQLAlchemy is just getting started. Note that when using eager loading, nothing changes as far as the ultimate results returned. The "loading strategy", as it's called, is designed to be completely transparent in all cases, and is for optimization purposes only. Any query criteria you use to load objects, including ordering, limiting, other joins, etc., should return identical results regardless of the combination of lazily and eagerly loaded relationships present.
Which brings us to the next big topic. What if we want to create joins that do change the results? For that, another Query tornado is coming...
One way to join two tables together is just to compose a SQL expression. Below we make one up using the id and user_id attributes on our mapped classes:

sql>>> session.query(User).filter(User.id==Address.user_id).\
...     filter(Address.email_address=='jack@google.com').all()
[<User(u'jack',u'Jack Bean', u'gjffdd')>]
Or we can make a real JOIN construct; below we use the join() function available on Table to create a Join object, then tell the Query to use it as our FROM clause:

sql>>> session.query(User).select_from(users_table.join(addresses_table)).\
...     filter(Address.email_address=='jack@google.com').all()
[<User(u'jack',u'Jack Bean', u'gjffdd')>]
Note that the join() construct has no problem figuring out the correct join condition between users_table and addresses_table; the ForeignKey we constructed says it all.
The easiest way to join is automatically, using the join() method on Query. Just give this method the path from A to B, using the name of a mapped relationship directly:

sql>>> session.query(User).join('addresses').\
...     filter(Address.email_address=='jack@google.com').all()
[<User(u'jack',u'Jack Bean', u'gjffdd')>]
By "A to B", we mean a single relation name or a path of relations. In our case we only have User->addresses->Address configured, but if we had a setup like A->bars->B->bats->C->widgets->D, a join along all four entities would look like:

session.query(A).join(['bars', 'bats', 'widgets']).filter(...)
Each time join() is called on Query, the joinpoint of the query is moved to be that of the endpoint of the join. As above, when we joined from users_table to addresses_table, all subsequent criteria used by filter_by() are against the addresses table. When you join() again, the joinpoint starts back from the root. We can also backtrack to the beginning explicitly using reset_joinpoint(). This instruction will place the joinpoint back at the root users table, where subsequent filter_by() criteria are again against users:

sql>>> session.query(User).join('addresses').\
...     filter_by(email_address='jack@google.com').\
...     reset_joinpoint().filter_by(name='jack').all()
[<User(u'jack',u'Jack Bean', u'gjffdd')>]
In all cases, we can get the User and the matching Address objects back at the same time, by telling the session we want both. This returns the results as a list of tuples:

sql>>> session.query(User).add_entity(Address).join('addresses').\
...     filter(Address.email_address=='jack@google.com').all()
[(<User(u'jack',u'Jack Bean', u'gjffdd')>, <Address(u'jack@google.com')>)]
Another common scenario is the need to join on the same table more than once. For example, if we want to find a User who has two distinct email addresses, both jack@google.com as well as j25@yahoo.com, we need to join to the addresses table twice. SQLAlchemy does provide Alias objects which can accomplish this; but far easier is just to tell join() to alias for you:

sql>>> session.query(User).\
...     join('addresses', aliased=True).filter(Address.email_address=='jack@google.com').\
...     join('addresses', aliased=True).filter(Address.email_address=='j25@yahoo.com').all()
[<User(u'jack',u'Jack Bean', u'gjffdd')>]
The key thing which occurred above is that our SQL criteria were aliased as appropriate, corresponding to the alias generated in the most recent join() call.
Let's try to delete jack and see how that goes. We'll mark him as deleted in the session, then we'll issue a count query to see that no rows remain:

>>> session.delete(jack)
sql>>> session.query(User).filter_by(name='jack').count()
0
So far, so good. How about Jack's Address objects?

sql>>> session.query(Address).filter(
...     Address.email_address.in_('jack@google.com', 'j25@yahoo.com')
... ).count()
2
Uh oh, they're still there! Analyzing the flush SQL, we can see that the user_id column of each address was set to NULL, but the rows weren't deleted. SQLAlchemy doesn't assume that deletes cascade; you have to tell it so.
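For contrast, cascading deletes can also live at the database level rather than in the ORM. The standard library's sqlite3 module can demonstrate ON DELETE CASCADE directly; this is a database-side alternative, distinct from the ORM-level cascade option that the tutorial configures next, and the schema here is just an illustration:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs per connection
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE addresses ("
             "id INTEGER PRIMARY KEY, "
             "user_id INTEGER REFERENCES users(id) ON DELETE CASCADE)")
conn.execute("INSERT INTO users (id) VALUES (1)")
conn.execute("INSERT INTO addresses (id, user_id) VALUES (1, 1)")
conn.execute("DELETE FROM users WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM addresses").fetchone()[0]
print(remaining)   # 0: the database itself cascaded the delete
```

Either approach works; the ORM-level cascade has the advantage of firing Python-side events and working on databases that lack referential cascade support.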
So let's roll back our work, and start fresh with new mappers that express the relationship the way we want:

sql>>> session.rollback() # roll back the transaction
>>> session.clear() # clear the session
>>> clear_mappers() # clear mappers
We need to tell the addresses relation on User that we'd like session.delete() operations to cascade down to the child Address objects. Further, we want Address objects which get detached from their parent User, whether or not the parent is deleted, to be deleted themselves. For these behaviors we use the two cascade options delete and delete-orphan, in the string-based cascade option to the relation() function:

>>> mapper(User, users_table, properties={
...     'addresses':relation(Address, backref='user', cascade="all, delete, delete-orphan")
... })
<sqlalchemy.orm.mapper.Mapper object at 0x...>
>>> mapper(Address, addresses_table)
<sqlalchemy.orm.mapper.Mapper object at 0x...>
Now when we load Jack, removing an address from his addresses collection will result in that Address being deleted:

# load Jack by primary key
sql>>> jack = session.query(User).get(jack.id)

# remove one Address (lazy load fires off)
sql>>> del jack.addresses[1]

# only one address remains
sql>>> session.query(Address).filter(
...     Address.email_address.in_('jack@google.com', 'j25@yahoo.com')
... ).count()
1
Deleting Jack will delete both Jack and his remaining Address:

>>> session.delete(jack)
sql>>> session.commit()

sql>>> session.query(User).filter_by(name='jack').count()
0
sql>>> session.query(Address).filter(
...     Address.email_address.in_('jack@google.com', 'j25@yahoo.com')
... ).count()
0
We're moving into the bonus round here, but let's show off a many-to-many relationship. We'll sneak in some other features too, just to take a tour. We'll make our application a blog application, where users can write BlogPosts, which have Keywords associated with them.
First some new tables:

>>> post_table = Table('posts', metadata,
...     Column('id', Integer, primary_key=True),
...     Column('user_id', Integer, ForeignKey('users.id')),
...     Column('headline', String(255), nullable=False),
...     Column('body', String)
... )
>>> post_keywords = Table('post_keywords', metadata,
...     Column('post_id', Integer, ForeignKey('posts.id')),
...     Column('keyword_id', Integer, ForeignKey('keywords.id')))
>>> keywords_table = Table('keywords', metadata,
...     Column('id', Integer, primary_key=True),
...     Column('keyword', String(50), nullable=False, unique=True))
sql>>> metadata.create_all(engine)
Then some classes:

>>> class BlogPost(object):
...     def __init__(self, headline, body, author):
...         self.author = author
...         self.headline = headline
...         self.body = body
...     def __repr__(self):
...         return "BlogPost(%r, %r, %r)" % (self.headline, self.body, self.author)
>>> class Keyword(object):
...     def __init__(self, keyword):
...         self.keyword = keyword
And the mappers. BlogPost will reference User via its author attribute:

>>> from sqlalchemy.orm import backref
>>> mapper(Keyword, keywords_table)
<sqlalchemy.orm.mapper.Mapper object at 0x...>
>>> mapper(BlogPost, post_table, properties={
...     'author':relation(User, backref=backref('posts', lazy='dynamic')),
...     'keywords':relation(Keyword, secondary=post_keywords)
... })
<sqlalchemy.orm.mapper.Mapper object at 0x...>
There are three new things in the above mapper:

- The User relation has a backref, like we've used before, except this time it references a function called backref(). This function is used when you'd like to specify keyword options for the backwards relationship.
- The backref() specifies lazy='dynamic'. This sets a default loader strategy on the attribute, in this case a special strategy that allows partial loading of results.
- The keywords relation uses a keyword argument secondary to indicate the association table for the many-to-many relationship from BlogPost to Keyword.
Usage is not too different from what we've been doing. Let's give Wendy some blog posts:

sql>>> wendy = session.query(User).filter_by(name='wendy').one()
>>> post = BlogPost("Wendy's Blog Post", "This is a test", wendy)
>>> session.save(post)
We're storing keywords uniquely in the database, but we know that we don't have any yet, so we can just create them:

>>> post.keywords.append(Keyword('wendy'))
>>> post.keywords.append(Keyword('firstpost'))
We can now look up all blog posts with the keyword 'firstpost'. We'll use a special collection operator, any, to locate "blog posts where any of its keywords has the keyword string 'firstpost'":

sql>>> session.query(BlogPost).filter(BlogPost.keywords.any(keyword='firstpost')).all()
[BlogPost("Wendy's Blog Post", 'This is a test', <User(u'wendy',u'Wendy Williams', u'foobar')>)]
If we want to look up just Wendy's posts, we can tell the query to narrow down to her as a parent:

sql>>> session.query(BlogPost).with_parent(wendy).\
...     filter(BlogPost.keywords.any(keyword='firstpost')).all()
[BlogPost("Wendy's Blog Post", 'This is a test', <User(u'wendy',u'Wendy Williams', u'foobar')>)]
Or we can use Wendy's own posts relation, which is a "dynamic" relation, to query straight from there:

sql>>> wendy.posts.filter(BlogPost.keywords.any(keyword='firstpost')).all()
[BlogPost("Wendy's Blog Post", 'This is a test', <User(u'wendy',u'Wendy Williams', u'foobar')>)]
Further information on mapping setups is in Mapper Configuration.
Further information on working with Sessions: Using the Session.