Category Archives: python

I had to work with JSON data where many properties were optional. In other words, a dictionary at any level could have missing keys, and the corresponding value should be treated as missing. For example, for input data {"k1": {"k2": {"k3": 1}}}, a nested look-up of "k1", "m2", "n3" should result in a missing value.

The builtin dictionary get method would work if keys were only missing at the lowest level:

```python
import json

json_str = '{"k1": {"k2": {"k3": 1}}}'
json_obj = json.loads(json_str)
value = json_obj["k1"]["k2"].get("n3", None)  # produces None as desired
value = json_obj.get("k1").get("m2").get("n3")  # raises AttributeError
```

We couldn't use an if statement or try/except, since most of the lookups occur inside expressions. The straightforward approach of checking for the presence of the key at each level is quite verbose:

```python
value = None if "k1" not in json_obj \
    else None if "m2" not in json_obj["k1"] \
    else json_obj["k1"]["m2"].get("n3")
```

Since the values are known to be dictionaries, and an empty dictionary won't contain the key we're looking up, we can improve on this by relying on the short-circuiting and operator:
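A sketch of the short-circuit version (reconstructed; the chain stops at the first falsy intermediate result, so the later subscripts are never evaluated for missing keys):

```python
import json

json_obj = json.loads('{"k1": {"k2": {"k3": 1}}}')

# each "and" stops evaluation as soon as a lookup yields a falsy value,
# so the subsequent subscripts are never reached for missing keys
value = (json_obj.get("k1")
         and json_obj["k1"].get("m2")
         and json_obj["k1"]["m2"].get("n3"))
assert value is None  # "m2" is missing, so the chain stops at None
```

One subtlety: if an intermediate value is an empty dictionary, the expression yields that (falsy) empty dict rather than None, which is usually acceptable since the lookup could not have succeeded anyway.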

This is fine if it only needs to be written in a couple of places. But if it's a common theme for the data, this code is still unattractive and error-prone, as it's repeated over and over.

Replacing the dictionaries with defaultdict is no good: it would not work if the nesting depth is not fixed, and it would add expensive noise to the object in the form of empty dictionaries created by every lookup attempt.

We could flatten the JSON structure:

```python
import json

json_str = '{"k1": {"k2": {"k3": 1}}}'
json_obj = json.loads(json_str)
flat_obj = flatten(json_obj)  # we'll need to write function flatten
value = flat_obj.get(("k1", "k2", "n3"), None)  # ok
value = flat_obj.get(("k1", "m2", "n3"), None)  # ok
```

However, this means we can no longer pass around intermediate dictionaries and lists. If we only need the leaves and don’t mind the overhead of flattening each JSON object, it’s an acceptable solution.

A simpler solution that doesn’t torture the JSON structure and adds no overhead is to follow the example of unittest.mock.MagicMock that silently accepts every request without actually doing any real work:
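A minimal sketch of such a class (the sentinel name NA matches its use later in the post; the exact original definition isn't shown here):

```python
class _NA:
    """A 'not available' sentinel that absorbs any lookup, MagicMock-style."""

    def get(self, key, default=None):
        return self

    def __getitem__(self, key):
        return self

    def __getattr__(self, key):
        return self

    def __eq__(self, other):
        # any two instances compare equal, so accidentally creating
        # several instances is harmless
        return isinstance(other, _NA)

    def __repr__(self):
        return 'NA'

NA = _NA()

json_obj = {"k1": {"k2": {"k3": 1}}}
# replace the default None with NA at every level of the lookup
value = json_obj.get("k1", NA).get("m2", NA).get("n3", NA)
assert value == NA
```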

For our immediate use case, we only need get and __getitem__ (for list index lookup), but I also added __getattr__ method for use below with member access. Also, we really only need a single instance of this class, but it’s safer to define __eq__: this way, we don’t have to worry about accidentally creating and comparing multiple instances.

If we want even more syntactic sugar, so we can write json_obj["k1"]["m2"]["n3"], we could convert all the dictionaries inside the json object to instances of a custom dict subclass. At this point, we’re really creating a mini-DSL, so we might as well allow attribute lookup in dictionaries: json_obj.k1.m2.n3:

```python
class OptionalKeyDict(dict):
    def __missing__(self, key):
        return NA

    def __getattr__(self, key):
        return self[key]

def convert_dict(json_obj, cls):
    if isinstance(json_obj, list):
        return [convert_dict(item, cls) for item in json_obj]
    elif isinstance(json_obj, dict):
        json_obj = cls(json_obj)
        for k, v in json_obj.items():
            json_obj[k] = convert_dict(v, cls)
        return json_obj
    else:
        # primitive
        return json_obj

json_custom = convert_dict(json_obj, OptionalKeyDict)
assert json_custom['k1']['m2']['n3'] == NA
assert json_custom['k1']['k2']['k3'] == 1
assert json_custom.k1.m2.n3 == NA
assert json_custom.k1.k2.k3 == 1

json_custom = convert_dict({"a": [{"b": 1}, {"d": 3}]}, OptionalKeyDict)
assert json_custom.a[0].b == 1
assert json_custom.a[1].b == NA
```

This is a lot less intrusive than flattening the json structure, but it still adds a modest runtime overhead: both at the initial conversion and on subsequent lookups (only if the key is missing).

It wouldn’t be hard to return NA on non-existent indexes in lists, but if that’s the correct semantics in our domain, we probably should be using dictionaries instead of lists in the first place.

Update: a couple of good solutions were suggested in response to this post on Reddit. One is:

```python
json_obj.get("k1", {}).get("m2", {}).get("n3")
```

I think it works best if there are no lists; with lists, the switching between the {} and [] defaults is easy to mess up.

The other is to define a function that can be used like this: json_get(json_obj, "k1", "k2", "n3"). It works well if every property is optional; it cannot express that some properties are required, such as in json_obj.get("k1", NA)["m2"].get("m3", NA).
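The function itself isn't quoted in the post; a hypothetical implementation might be:

```python
def json_get(obj, *keys, default=None):
    """Walk nested dictionaries, returning default as soon as a key is missing."""
    for key in keys:
        if isinstance(obj, dict) and key in obj:
            obj = obj[key]
        else:
            return default
    return obj

json_obj = {"k1": {"k2": {"k3": 1}}}
assert json_get(json_obj, "k1", "k2", "k3") == 1
assert json_get(json_obj, "k1", "m2", "n3") is None
```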

Generic Class

In this part, we will implement graph data structure using classes and interfaces, and discuss when it’s worth overruling type hints.

We need to make a few design choices.

First, should we require the user to provide unique and hashable node objects, or should we accept any values and wrap them into class Node ourselves? Let’s do the safer thing and wrap values into Node instances: this protects us in case the values could become non-unique or non-hashable in the future.1

Second, should we store adjacency sets inside or outside Node? I have a slight personal preference for nodes knowing their adjacency information, since it occasionally allows us to use one function argument (a node) rather than two (a node and a graph).2

Third, should we separate Graph and Node interfaces from their implementation? For example, we can have abstract base classes IGraph and IMutableGraph from which all concrete implementations would inherit. Apart from making the design cleaner, it should help generate more precise type hints (e.g., a traversal function may work on any object that implements IGraph, since it doesn’t need to mutate anything). This is a good idea, but let’s do without interfaces for now; we’ll add them later.

We provided type hints for instance attributes in the class definition body of Node and Graph; this is good practice, but it is also acceptable to annotate their types inside __init__ (or not at all, if mypy can infer them).
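The class definitions themselves aren't reproduced here; a minimal sketch consistent with the choices above (method names and details are illustrative, not the post's actual code) might be:

```python
from typing import Generic, Set, TypeVar

T = TypeVar('T')

class Node(Generic[T]):
    # instance attributes annotated in the class body
    value: T
    adj: 'Set[Node[T]]'  # forward reference: Node isn't defined yet

    def __init__(self, value: T) -> None:
        self.value = value
        self.adj = set()

class Graph(Generic[T]):
    nodes: Set[Node[T]]

    def __init__(self) -> None:
        self.nodes = set()

    def add_node(self, value: T) -> Node[T]:
        node = Node(value)
        self.nodes.add(node)
        return node

    def add_edge(self, tail: Node[T], head: Node[T]) -> None:
        tail.adj.add(head)

    def remove_node(self, node: Node[T]) -> None:
        for other in self.nodes:
            other.adj.discard(node)
        self.nodes.discard(node)
```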

Type System Limitations

Now, just like in Part I, let’s again add reverse adjacency information to Graph in order to make iteration through incoming edges faster. The new class ReversibleGraph should share a big part of its implementation with Graph, so it would make sense to derive it from Graph. We also want to reuse all of the global functions since they should work without change for both implementations.

However, if we follow this plan, our code will not type check. There are two reasons for that.

Liskov Substitution Principle

If X derives from Y, it should *always* be safe to substitute an instance of X in place of an instance of Y.3

Therefore, when mypy sees that ReversibleGraph inherits from Graph, it analyzes the code to see whether it is indeed safe to use a ReversibleGraph in place of Graph. It turns out that it’s not safe, and so the type check fails.

To see why it’s not always safe, consider this function:

```python
def f(g: Graph[int], node: Node[int]) -> None:
    if node.value > 0:
        g.remove_node(node)
```

If we try to call it with g of type ReversibleGraph and node of type Node, it would fail at runtime when ReversibleGraph.remove_node tries to access the non-existent node.backward attribute.4

To summarize, mypy thinks inheritance is serious business, and that it represents a relationship with some strict guarantees (LSP). We, on the other hand, wanted to use inheritance just to help with code reuse. A feature is in the works that would let the programmer turn LSP on or off as desired; but for now, we have to find another solution.

We could satisfy mypy by using composition instead of inheritance (that is, by wrapping Graph instance inside ReversibleGraph instance). But let’s not do that because this will make the code less logical and more verbose. Instead, let’s simply mark the lines about which mypy complains with type: ignore. Those directives are very precise: they suppress error messages for that line, but the type checker still parses those lines, and uses information it learned from them to type check the rest of the program.
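As an illustration (the class and attribute names here are assumed, not taken from the post's code), the kind of line that earns the directive is a narrowed override:

```python
from typing import Set

class Node:
    def __init__(self) -> None:
        self.adj: Set['Node'] = set()

class ReversibleNode(Node):
    def __init__(self) -> None:
        super().__init__()
        self.backward: Set['ReversibleNode'] = set()

class Graph:
    def __init__(self) -> None:
        self.nodes: Set[Node] = set()

    def remove_node(self, node: Node) -> None:
        self.nodes.discard(node)

class ReversibleGraph(Graph):
    # mypy flags this override: the parameter type is narrowed from Node
    # to ReversibleNode, which violates LSP, so we suppress the error
    def remove_node(self, node: ReversibleNode) -> None:  # type: ignore
        for other in node.backward:
            other.adj.discard(node)
        super().remove_node(node)
```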

Types Outside the Type System

Since we were careful enough to make Graph and ReversibleGraph support the same API, a function like read_graph should work without change for both. However, its type annotation is not obvious.

The first argument can be any class that supports the mutable graph API, that is, IMutableGraph. The return type should be based on the value of the first argument; for example, if we call read_graph(ReversibleGraph, ...), the type checker should conclude that the return type is ReversibleGraph. (We can't set the return type to IGraph, since that would prevent us from using the reverse adjacency included in ReversibleGraph.)
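The attempted annotation might resemble this sketch (illustrative; interface and helper names are assumptions; python accepts it at runtime, but mypy rejects a type variable whose bound is itself generic):

```python
from typing import Generic, Type, TypeVar

T = TypeVar('T')

class IMutableGraph(Generic[T]):
    """Stand-in for the mutable graph interface."""

# what we would like to write: a type variable ranging over any
# concrete graph class; mypy rejects the generic bound below
G = TypeVar('G', bound=IMutableGraph[T])

def read_graph(graph_class: Type[G], text: str) -> G:
    graph = graph_class()
    # ... parse text and populate graph ...
    return graph
```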

This would have worked if IGraph wasn’t generic; unfortunately, type variables cannot be bound by a generic type5. We also cannot provide type arguments to type variables. The above code would not even pass mypy’s syntax check.

The problem is simple: the types we’ve been trying to define do not exist in mypy’s type system. Such types require more sophisticated language implementation and more learning effort to use, and they are usually found only in the more advanced functional languages (such as Haskell and Scala).

In my opinion, the best solution here is to make class Node non-generic by changing the type of its value attribute to Any.6 This means we no longer need generic classes for Graph and Node, and so we can use a type variable (bounded by the now non-generic interface) to provide type hints for the graph functions. Now is the time to add the interface classes as we discussed earlier; it's easier to do so without dealing with generics.

Our final implementation even reuses the test functions:
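The final code isn't reproduced here; its overall shape, per the decisions above (Any-valued Node, interface classes, no generics), might look like this sketch:

```python
from typing import AbstractSet, Any, Set

class Node:
    value: Any

    def __init__(self, value: Any) -> None:
        self.value = value
        self.adj: Set['Node'] = set()

class IGraph:
    # read-only view of the graph: enough for traversals
    nodes: AbstractSet[Node]

class IMutableGraph(IGraph):
    def add_node(self, value: Any) -> Node:
        raise NotImplementedError

    def add_edge(self, tail: Node, head: Node) -> None:
        raise NotImplementedError

class Graph(IMutableGraph):
    def __init__(self) -> None:
        self.nodes: Set[Node] = set()

    def add_node(self, value: Any) -> Node:
        node = Node(value)
        self.nodes.add(node)
        return node

    def add_edge(self, tail: Node, head: Node) -> None:
        tail.adj.add(head)
```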

Note that we didn't bother trying to prevent assignment to attributes in the immutable interfaces; for example, even though the nodes attribute of IGraph is an (immutable) AbstractSet, users can still assign a new container to nodes. While @property could have guarded against such accidental assignment, it's probably not worth the runtime cost.7

Undirected Graphs

Our API for a directed graph is so limited that it fits an undirected graph as is; we can always separate out the interfaces later, when the need arises.

Conceptually, an undirected graph is equivalent to a directed graph that satisfies two constraints:

– each edge has a corresponding edge in the opposite direction

– there are no loops (i.e., edges with the same tail and head)

Therefore, one way to implement an undirected graph is to reuse our (directed) Graph implementation. We just need to modify the add_edge and remove_edge methods to ensure those constraints are never violated:
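A sketch with simplified stand-in Node and Graph classes (names assumed, not the post's actual code):

```python
from typing import Any, Set

class Node:
    def __init__(self, value: Any) -> None:
        self.value = value
        self.adj: Set['Node'] = set()

class Graph:
    def __init__(self) -> None:
        self.nodes: Set[Node] = set()

    def add_node(self, value: Any) -> Node:
        node = Node(value)
        self.nodes.add(node)
        return node

    def add_edge(self, tail: Node, head: Node) -> None:
        tail.adj.add(head)

    def remove_edge(self, tail: Node, head: Node) -> None:
        tail.adj.remove(head)

class UndirectedGraph(Graph):
    def add_edge(self, tail: Node, head: Node) -> None:
        if tail is head:
            raise ValueError('loops are not allowed in an undirected graph')
        # store the edge in both directions to keep the constraints
        super().add_edge(tail, head)
        super().add_edge(head, tail)

    def remove_edge(self, tail: Node, head: Node) -> None:
        super().remove_edge(tail, head)
        super().remove_edge(head, tail)
```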

It might seem wasteful to store each edge twice, but we can't really improve on that. Say we define an order on nodes and store each edge only once, in the adjacency set of the lower-valued node. In this case, iterating through the neighbors of a node would be ridiculously slow, since its neighbors may end up stored on other nodes.

Conclusion

Class-based representation keeps the API clean and stable.

However, as we make our approach more general, type hints become more complex, and at some point we run into the limitations of the python type system.

By its nature, static type checking provides insurance against certain bugs at the cost of constraints on the developer. Very powerful type systems (such as in Scala, Haskell, Idris) are extremely flexible; but they are also slightly harder to learn, debug, and implement. Simpler type systems (e.g., in Java, C#, C++, and python) are less flexible, and will occasionally get in the way of the developers.

I don't recommend non-trivial refactoring just to satisfy the type checker, unless it also improves code quality. Often, it's better to just overrule mypy using type: ignore.

Introduction

In this part, I’ll show a few simple examples of graph implementation using python 3.6, and validate them with the static type checker, mypy.

Type hints (or type annotations) are a feature added to python 3 that allows programmers to include type information in their code. Type annotations can be validated by a third-party static type checker. Static types serve to provide early ("compile-time") warnings about possible bugs; in addition, type hints often make the code easier to understand. Type hints are completely optional and have almost no impact on the run-time behavior of the program.1

Dictionary Representation

Let's start with directed graphs. We will assign each node a unique id, using integers in range(n_nodes).

First, let’s get out of the way the comparison of adjacency matrix vs adjacency lists. An adjacency matrix representation uses a 2D boolean matrix (likely represented in python as a list of lists), where cell (i, j) indicates if there is an edge from i to j. An adjacency lists representation stores a collection of neighbors for each node.

Adjacency matrix uses O(n_nodes^2) space and takes O(n_nodes) time to iterate through the neighbors of a single node (a very common operation in graph algorithms). The corresponding costs for adjacency lists are O(n_edges) and O(n_degree). Therefore, the adjacency lists approach always wins, and its advantage is especially large for sparse graphs. In fact, adjacency matrix representation should only be used in a few very specific circumstances:

– if the 2D array is already supplied from outside, and it's not worth converting it to adjacency lists

The obvious implementation choices for adjacency lists are a list or a set. A set is usually better because it offers O(1) lookup, insertion, and removal.3 The adjacency sets themselves can be stored in a list or in a dictionary; a dictionary is better because it allows O(1) node removal.4

So we have our first implementation of a directed graph as a dictionary of sets:
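A sketch of such an implementation; the serialization format here (one "id: neighbor neighbor ..." line per node) is invented for illustration:

```python
from typing import Dict, Set

Graph = Dict[int, Set[int]]

g: Graph = {0: {1, 2}, 1: {2}, 2: set()}

def write_graph(graph: Graph) -> str:
    # hypothetical format: one line per node, "id: neighbor neighbor ..."
    return '\n'.join(
        '{}: {}'.format(tail, ' '.join(str(head) for head in sorted(heads)))
        for tail, heads in sorted(graph.items()))

def read_graph(text: str) -> Graph:
    graph: Graph = {}
    for line in text.splitlines():
        tail_str, _, heads_str = line.partition(':')
        graph[int(tail_str)] = {int(head) for head in heads_str.split()}
    return graph

assert read_graph(write_graph(g)) == g
```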

For demonstration, I’ve added functions that convert between our graph representation and a very simple serialization format.

Of course, any additional information would have to be stored separately, for example in dictionaries indexed by node_id or by tuples (tail_id, head_id).

While this is a very simple and limited implementation, it’s quite usable in simple cases.

Digression: Graph Equality

You can skip this section if you’re not interested in graph comparisons.

Note how in the test_serialization, we cannot assert write_graph(read_graph(g)) == g: it will fail because the order of lines and of neighbors within each line may change after the two conversions, and also because of possible differences in whitespace. On the other hand, assert read_graph(write_graph(g)) == g works.

The behavior of the equality operator with our graphs is somewhat misleading: it does not check whether the two objects represent equivalent ("isomorphic", in mathematical terms) graphs. For example, {0: {1}, 1: set()} != {0: set(), 1: {0}}, and yet the lhs and the rhs represent equivalent graphs (two nodes connected by a single edge).

Our graph object is a nested structure of dictionaries and sets, with integer node ids at the bottom tier. As a result, two graph objects compare equal (using ==) if for each node id in one graph, the other graph has a node with the same id, and these two nodes have the same neighbor ids. This is a much stricter rule than the mathematical equivalence. It is easy to confirm that this is precisely the same as equivalence of labeled graphs, i.e. graphs where each node is tagged with an integer label.5

Since we set out to represent regular graphs rather than labeled graphs, this is somewhat unfortunate.6 We might consider disabling the comparison operator for our graphs to prevent subtle bugs due to misunderstanding of equality, but we cannot do that because our implementation uses built-in dict.

So we just have to be careful to remember what == does for graphs. And luckily, for the purposes of test_serialization, comparing labeled graphs is good enough: our conversion functions happen to preserve all the node ids (even though I didn’t think about this when I wrote the code).

Using Node Values as Node Ids

Sometimes, the node values are known to be unique and hashable. It is then tempting to just use them as node ids instead of storing values separately:
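An illustrative sketch (the helper make_graph is my own, not from the post):

```python
from typing import Dict, Hashable, Iterable, Set, Tuple, TypeVar

T = TypeVar('T', bound=Hashable)

def make_graph(values: Iterable[T],
               edges: Iterable[Tuple[T, T]]) -> Dict[T, Set[T]]:
    # node values double as node ids; assumes they are unique and hashable
    graph: Dict[T, Set[T]] = {value: set() for value in values}
    for tail, head in edges:
        graph[tail].add(head)
    return graph

g = make_graph(['a', 'b', 'c'], [('a', 'b'), ('a', 'c')])
assert g['a'] == {'b', 'c'}
```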

This code is slightly fragile because we have to remember to modify it if the values become non-unique in the future; also, ideally we should verify that the values provided to us are actually unique.

Note: I used generic types here. Generic types use one or several parameters (type variables, introduced with TypeVar) to represent a whole family of types. Putting generic types in the function signature is similar to declaring several overloaded functions, one for each possible value of the parameter, but with precisely the same body:

```python
T = TypeVar('T')

def f(x: T) -> List[T]:
    return [x, x]
```

is roughly equivalent to

```python
@overload
def f(x: int) -> List[int]:
    return [x, x]

@overload
def f(x: str) -> List[str]:
    return [x, x]

# and so on...
```

Except that by “several” I mean infinitely many, since parameter (type variable) T in this example can represent any of the infinitely many types that may be defined in the program.

A generic class, marked as such by deriving it from Generic[T], is similar to a generic function, except that the overloading happens based on the constructor arguments. Once the concrete type for each type variable is determined for a given class instance, it stays the same for all attributes and methods of that instance. If the constructor arguments are insufficient for mypy to figure out the concrete types, then mypy asks the user to add a type annotation. In our case, x = Node(1) would be fine because mypy can figure out that the concrete type of T here is int.7 x = Node() won't tell mypy anything about T, so mypy requires a type annotation, e.g. x: Node[int] = Node() or x = Node[int]().

Node Class

If we want the code to be safer, or if node values are not actually unique and hashable, and yet we still prefer to store node values together with the graph rather than elsewhere, we can just wrap node values inside a class (we can rely on the default equality of user-defined classes, which compares different instances as not equal):
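A minimal sketch of such a wrapper class:

```python
from typing import Dict, Generic, Set, TypeVar

T = TypeVar('T')

class Node(Generic[T]):
    def __init__(self, value: T) -> None:
        self.value = value

    def __repr__(self) -> str:
        return 'Node({!r})'.format(self.value)

# default identity-based equality keeps equal-valued nodes distinct
a, b = Node(1), Node(1)
assert a != b

# the graph maps Node objects to adjacency sets of Node objects
graph: Dict[Node[int], Set[Node[int]]] = {a: {b}, b: set()}
```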

With nodes as custom class objects, we can customize their behavior with methods. The only obvious addition I thought of is __repr__, which helps debugging. Be careful not to override __eq__ method, since the whole point of class Node is to ensure different nodes never compare equal (so that they are kept separate in the dictionary).

Note how both read_graph and write_graph functions became more complex. This is because we no longer store node ids in the graph object, instead referring directly to the Node objects. This only works in a live graph; in the serialized format, we still need to use node ids. As a result, read_graph and write_graph need to create a mapping between Node objects and node ids.8

Also note that the test became much more complex. read_graph(write_graph(g)) == g no longer holds because at the bottom of the nested collections that we use to represent the graph, we now have Node objects with identity-based equality rather than integers or strings with value-based comparison. Since a Node object will never compare equal to any other Node object, two different graphs won’t be equal.9 If we want to check even the simplistic “labeled graph” equality, we need to write our own function; and that’s what I chose to do.

The function labeled_graph_eq verifies whether two graphs are equal when viewed as labeled graphs, with the node labels given by the value attribute. Unlike in the previous examples, we cannot assume that labels are unique (that's the main reason why we wrapped node values in a class to begin with). Handling non-unique labels is a bit tricky, and labeled_graph_eq mainly serves to help in unit tests, where we can make labels unique. Therefore, I decided to keep things simple and raise NotImplementedError when non-unique labels are detected.10
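labeled_graph_eq itself isn't reproduced here; a hypothetical version along those lines (requiring unique labels) could be:

```python
from typing import Any, Dict, Set

class Node:
    def __init__(self, value: Any) -> None:
        self.value = value

Graph = Dict[Node, Set[Node]]

def labeled_graph_eq(g1: Graph, g2: Graph) -> bool:
    """Compare graphs as labeled graphs, labels given by node values."""
    def label_map(g: Graph) -> Dict[Any, Node]:
        labels = {node.value: node for node in g}
        if len(labels) != len(g):
            raise NotImplementedError('non-unique labels')
        return labels

    labels1, labels2 = label_map(g1), label_map(g2)
    if labels1.keys() != labels2.keys():
        return False
    # for each label, the neighbor label sets must coincide
    return all(
        {n.value for n in g1[node]} == {n.value for n in g2[labels2[label]]}
        for label, node in labels1.items())
```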

Set Representation

Now that we have a custom class to represent nodes, we can even store the adjacency sets inside them. In that case, the graph is no longer a dictionary, but just a set of nodes. Unfortunately, as we make this change, we will break our existing code such as write_graph and labeled_graph_eq:
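A simplified sketch of this representation:

```python
from typing import Generic, Set, TypeVar

T = TypeVar('T')

class Node(Generic[T]):
    # instance attribute annotation; the string is a forward reference,
    # since the name Node is not yet bound inside its own class body
    adj: 'Set[Node[T]]'

    def __init__(self, value: T) -> None:
        self.value = value
        self.adj = set()

# the graph is now simply a set of nodes
a: Node[int] = Node(1)
b: Node[int] = Node(2)
a.adj.add(b)
graph = {a, b}
```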

I think this is a (very minor) improvement over the previous version because a node object is now sufficient to find all its neighbors (the graph is no longer needed). As a result, some graph functions (e.g., a BFS traversal) will need one less argument. Related to that, Node.__str__ / Node.__repr__ also have more information at their disposal (e.g., they could now report the node degree).

Note the instance attribute type annotation for adj inside class Node. This is telling mypy that Node objects have an instance attribute adj of the indicated type. This is necessary because mypy cannot infer the type of adj based on the assignment of an empty set (without this annotation, mypy will assume that adj has type Set[Any], which effectively disables part of the type checking).

Also, I had to use the string 'Set[Node[T]]' because the class name is not yet defined while python executes the annotations in its own class body. This problem is solved by using a forward reference, which is just a string containing the annotation you originally wanted to write.

Limitations

To recap, we considered several simple graph implementations:

– Graph is a dictionary with nodes as keys, and adjacency sets as values
(1) Nodes are integer ids (node values stored separately)
(2) Nodes are user-provided values (which have to be hashable and unique)
(3) Nodes are instances of a custom class, which wraps user-provided values
– Graph is a set of nodes
(4) Nodes are instances of a custom class which contains values and adjacency sets

In simple cases, these approaches work fine.

But let’s try to add a new feature to our graph.

Many graph algorithms need to iterate through the incoming edges of a given
node. In order to do this efficiently, we will keep track of the adjacent
nodes in the reverse direction.11

With implementations (1), (2), (3) we could change the values of the dictionary to namedtuples with forward and reverse adjacency sets. When adding or deleting edges, we now need to update the reverse adjacency set; since we can't add a method to the builtin dict, we will define global functions to do that.12

One problem is that we're breaking the API of our graph: we'll need to replace graph[node] with graph[node].forward, graph[v].add(w) with add_edge(graph, v, w), and graph[v].remove(w) with remove_edge(graph, v, w).

Also, we cannot disable dict methods, so if graph[v].add(w) is used by accident, we will end up with a corrupt graph. Luckily, most such errors will probably be caught by the type checker; but leaving many useless or potentially dangerous methods exposed is still unattractive.

With implementation (4), we seem to have more flexibility since we could put reverse adjacency data inside the node instances. It does buy us some reduction in code breakage: we can keep the API for simple iterations unchanged, so only graph mutations need to be rewritten. But it comes at a cost: we can no longer rely on the type checker to catch bugs such as graph[v].adj.add(w) without the matching graph[w].reverse_adj.add(v). In fact, those errors won't even cause an immediate runtime exception; they will instead silently corrupt the graph object, a far more dangerous bug.

In summary, here are the problems with our current implementations:

API often breaks as we add new features

We cannot disable methods of builtin classes, so we expose many methods that are not part of the public API. Some of them may be dangerous (e.g., dictionary item assignment when we no longer want it to be used)

We cannot add new methods to builtin classes, so any functions that work
on the graph need to be global (even in python, it’s often better to organize
related functions together under a class).

If these concerns are relevant to us, for example if we are likely to enhance
the graph functionality over time, we should wrap the graph data structure in
a class. We’ll do so in part II.