
MongoDB is a high-performance schemaless database that allows you to store and
retrieve JSON-like documents. MongoDB stores these documents in collections,
which are analogous to SQL tables. Because MongoDB is schemaless, the database
makes no guarantees to the client about the format of the data returned by a
query; you can put any kind of document you want into a collection.

While this dynamic behavior is handy in a rapid development environment where you
might delete and re-create the database many times a day, it starts to be a
problem when you need to make guarantees about the type of data in a collection
(because your code depends on it). The goal of Ming is to allow you to specify
the schema for your data in Python code and then develop with confidence, knowing
the format of the data you get from a query.

Ming manages your connection to the MongoDB database using an object known as a
DataStore. The DataStore is actually just a
thin wrapper around a pymongo Database object.
(The actual Database object can always be accessed via the db property of the
DataStore instance.) For this tutorial, we will be using a single, global DataStore:

Note that if you’re connecting to MongoDB on the default host (localhost) and
port (27017), you can just provide the database name here:

bind = create_datastore('tutorial')

Ming, like many object-relational mappers (ORMs), revolves around the idea of
model classes. In order to create these classes, we need a way of connecting
them to the datastore. Ming uses an object known as a Session to do this. For
this tutorial, we will be using a single global Session:

from ming import Session

session = Session(bind)

We can also configure() Ming with a set of URLs, using
a config dict of "ming.*" keys.
The first part of each key must be 'ming', the second part is a session name,
and the remaining parts name parameters used to construct a DataStore object.

config = {'ming.example.uri': 'mongodb://localhost:27017/tutorial'}
ming.configure(**config)
# and later access the named session with:
Session.by_name('example')

Mongo-in-Memory

Ming also provides a "mongo in memory" implementation, which is non-persistent,
implemented in pure Python, and possibly much faster than MongoDB for the very
small data sets you might use in testing.
To use it, just change the connection url to mim://

There are two styles to define your models in Ming, declarative and
imperative, and both styles are available both at the document level (which
this tutorial covers) and at the ODM layer (covered by the ODM tutorial). Which
you end up using is mostly a matter of personal style. The declarative style
actually predated the imperative style, and the main author of Ming uses both
styles interchangeably in application programming, based on which seems more
convenient for the task at hand.

Due to the history of the declarative model preceding the imperative model,
you may notice that the documentation is skewed towards the declarative
model. Keep in mind that almost anything you can do declaratively, you can also
do imperatively in Ming. Also, if you get a chance, feel free to submit
documentation bugs to the Ming project at SourceForge.

Now that that boilerplate is out of the way, we can actually start writing our
models. We will start with a model representing a WikiPage. We can do that in
“imperative” mode as follows:

In the declarative style, rather than using the collection() function, we define
the class directly, grouping some of the metadata used by Ming into a
__mongometa__ class in order to reduce namespace conflicts. Note that we don't
have to provide the names of our various Field instances as strings here, since
their names are implied by the class attributes to which they are assigned. If
we want to map a document field to a different class attribute, we can do so
using the following syntax:

_renamed_field = Field('renamed_field', str)

This is sometimes useful for “privatizing” document members that we wish to wrap
in @property decorators or other access controls.

Methods

We can add our own methods to the WikiPage class, too. However, the make()
method is reserved for object construction and validation; see the Bad
Data section.

As you can see, Ming documents can be accessed
either using dictionary-style lookups (page['title']) or attribute-style
lookups (page.title).
In fact, all Ming documents are dict subclasses, so all the standard methods on
Python dict objects are available.

In order to actually interact with the database, Ming provides a standard
attribute .m, short for manager, on each mapped class.
To save the document we just created to the database, for instance, we would
simply type:

When the page was saved to the database, the database assigned a unique _id
attribute. (If we had wished to specify our own _id, we could have also done
that.) Now, let’s query the database and make sure that the document actually
got saved:

Ming provides an .m.find() method on class managers that works just like the .find() method on collection
objects in pymongo and is used for performing queries.
The result of a query is a Python iterator that wraps a pymongo cursor,
converting each result to a ming.Document before
yielding it.
Like SQLAlchemy, Ming provides several convenience methods on query results
(Cursor):

one()

Retrieve a single result from a query. Raises an exception if the query
returns either zero results or more than one.

first()

Retrieve the first result from a query. If there are no results, return
None.

all()

Retrieve all results from a query, storing them in a Python list.

count()

Returns the number of results in a query

limit(limit)

Restricts the cursor to only return limit results

skip(skip)

Skips ahead skip results in the cursor (similar to a SQL OFFSET clause)

sort(*args, **kwargs)

Sorts the underlying pymongo cursor using the same semantics as the
pymongo.Cursor.sort() method

Ming also provides a convenience method .m.get(**kwargs) which is equivalent to
.m.find(kwargs).first() for simple queries that are expected to return one
result.
Some examples:

Ming documents are validated at certain points in their life cycle. (Validation
is where the schema is enforced on the document.) Generally, schema validation
occurs when saving the document to the database or when loading it from the
database. Additionally, validation is performed when the document is created
using the .make() method.

So what about the schema? So far, we haven’t seen any evidence that Ming is
doing anything with the schema information at all. Well, the first way that Ming
helps us is by making sure we don’t specify values for properties that are not
defined in the object:

Up till now, we have generally been defining schema items as native Python
types. This is a convenient shortcut provided by Ming to reduce your
finger-typing. Sometimes, however, you’ll need to directly specify the actual
validator used. These validators are defined in the ming.schema module.

Ming, like MongoDB, allows for documents to be arbitrarily nested.
For instance, we might want to keep a metadata property on our WikiPage that
kept tag and category information.
To do this, we just need a slightly more complex schema.
Add the following line to the WikiPage definition:

One of the most irritating parts of maintaining an application for a while is the
need to do data migrations from one version of the schema to another.
While Ming can’t completely remove the pain of migrations, it does seek to make migrations
as simple as possible.

Suppose we decided that we didn’t want the metadata property; we’d like to
“promote” the categories and tags properties to be top-level attributes of
the WikiPage.
We might write our new schema as follows:

What we need now is a migration. Luckily, Ming makes migrations manageable.
All we need to do is include the previous schema and a migration function in
our __mongometa__ object, plus a way to force the migration. For the 'forcing'
part, we'll add a version field to the new schema:

And that's it. Migrations are performed lazily as the objects are loaded
from the database. Note that we can make OldWikiPage a version_of an
EvenOlderWikiPage, and the migration will automatically migrate each object to
the latest version. If you wish to migrate all the objects in a collection at
once, just do the following: