I keep forgetting I can store references to the same set on different objects.

Like, I had a dictionary that mapped values to a set, but I also wanted the set's contents to be accessible by another object -- so I kept trying to think up a scheme to update the object every time this set changed. For some reason, I completely missed that I can just have the object refer to this set, thereby accessing it directly.

(I know that's kind of dangerous because you now have two places where you can modify the set, but in this context, I'm pretty sure it's okay)

I'm working with a class that wraps two dictionaries -- one that maps a unique value ("A") to a non-unique value ("b"), and one that maps that non-unique to a set containing all the unique values that are mapped to it. In other words, as one dictionary maps A -> b, another maps b -> {All As mapped to b}.
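Concretely, a minimal sketch of the two directions (the names here are mine, not from my actual code):

```python
forward = {"A1": 5, "A2": 5, "A3": 7}   # unique key -> non-unique value

# reverse direction: value -> set of every key mapped to it
reverse = {}
for key, value in forward.items():
    reverse.setdefault(value, set()).add(key)

# reverse == {5: {"A1", "A2"}, 7: {"A3"}}
```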

Does this have a bad smell? I mean, I see a use for it right now -- it solves a major problem and seems really, really useful. But should I be concerned that I'm basically mapping in two directions? Also, is there a name for this sort of mapping? Is it just a normal bijective mapping? It doesn't look like it, since (I think?) bijective mappings must have unique keys and values.

EDIT: Both keys and values are hashable; by 'non-unique', I only mean that A1 -> 5 and A2 -> 5 is valid. Also, maybe this would be describable as an 'invertible' dictionary?

http://mentalfloss.com/article/26316/why-does-my-gadget-say-its-december-31-1969 wrote:Unix is a computer operating system that, in one form or another, is used on most servers, workstations, and mobile devices. It was launched in November 1971 and, after some teething problems, the “epoch date” was set to the beginning of the decade, January 1, 1970. What this means is that time began for Unix at midnight on January 1, 1970 GMT. Time measurement units are counted from the epoch so that the date and time of events can be specified without question. If a time stamp is somehow reset to 0, the clock will display January 1, 1970.

So where does December 31 fit in? It’s because you live in the Western Hemisphere. When it’s midnight in Greenwich, England, it’s still December 31st in America, where users will see December 31, 1969—the day before Unix’s epoch.

I know it's probably been said in the few years it took me to answer, but that's why it always resets to 1969. Additionally, this question makes me think that all Unix systems, i.e.

(everything)

stop making new dates at that time, probably for memory reasons or something. No clue.

Sorry about the randomness of that.


I'm working with a class that wraps two dictionaries -- one that maps a unique value ("A") to a non-unique value ("b"), and one that maps that non-unique to a set containing all the unique values that are mapped to it. In other words, as one dictionary maps A -> b, another maps b -> {All As mapped to b}.

Does this have a bad smell? I mean, I see a use for it right now -- it solves a major problem and seems really, really useful. But should I be concerned that I'm basically mapping in two directions? Also, is there a name for this sort of mapping? Is it just a normal bijective mapping? It doesn't look like it, since (I think?) bijective mappings must have unique keys and values.

EDIT: Both keys and values are hashable; by 'non-unique', I only mean that A1 -> 5 and A2 -> 5 is valid. Also, maybe this would be describable as an 'invertible' dictionary?

It's a Good Idea to mention which language you're using. IIRC, you use Python. Some languages may have better ways to handle that, but what you're doing is fine in Python. Although I guess you could wrap it in a class to make it cleaner. That way you're bundling the two dicts into one object which can have methods that let you manipulate both dicts in one operation.

I second building it into a class. It would not be good if you had to remember to modify one dict after modifying the other. It's best to write methods that do both.

Thanks! I ended up wrapping two dicts in a class, yeah -- and overriding their __setitem__ and __delitem__ methods (regrettably, using 'update' -- or other bulk methods like it -- will probably break the dict, but since this is for my own use, I'm not too concerned).
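For reference, a minimal sketch of that kind of wrapper (class and attribute names are mine, not the original code):

```python
class TwoWayMap(dict):
    """Maps key -> value while keeping value -> {keys} in .inverse.

    Note: only __setitem__/__delitem__ are overridden, so bulk
    methods like update() will silently bypass the inverse dict.
    """
    def __init__(self):
        super().__init__()
        self.inverse = {}

    def __setitem__(self, key, value):
        if key in self:                          # re-mapping an existing key
            self.inverse[self[key]].discard(key)
        super().__setitem__(key, value)
        self.inverse.setdefault(value, set()).add(key)

    def __delitem__(self, key):
        self.inverse[self[key]].discard(key)
        super().__delitem__(key)

m = TwoWayMap()
m["A1"] = 5
m["A2"] = 5
# m.inverse[5] == {"A1", "A2"}
```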

Semi-related: I've been toying with the notion of submitting my own PEP for Python; it's probably doomed to failure, but it'd be nice to try something. It's to solve a problem I keep encountering and having to find creative ways to code around. Specifically, it has to do with mixing positional parameters and variable keyword arguments:
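(The code that seems to be missing here, reconstructed as a minimal sketch; the parameter names 'a' and 'b' follow the surrounding discussion:)

```python
def foo(a, b, **kwargs):
    return a, b, kwargs

try:
    foo(1, 2, a=3, b=4)
except TypeError as e:
    print(e)   # foo() got multiple values for argument 'a'
```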

...is not permissible; it raises a TypeError (because I can't both have 'a' and 'b' as positionals and use them as keyword arguments).

So, I guess the solution is 'use names for your positionals you know you'll never use for keywords' -- but what if I have no idea what keywords I'll be using, yet? When I have a **kwargs, why can't that **kwargs contain labels that are also shared by positionals?

I realize this seems like a pretty slim use-case, but I've seen it come up a couple of times -- particularly where you're trying to use a function or class to produce objects with arbitrary attribute assignments (so you're using **kwargs to basically assign attributes). If you have any positional arguments, whatever name you use for those positional arguments cannot be keywords.

EDIT:

Spoiler:

--and actually, using **kwargs to assign attributes to objects is super-helpful, because **kwargs only accepts string to object assignment -- which means you can't use **kwargs to assign attributes and somehow manage to stash a non-string in the object's __dict__. So you eliminate a whole subclass of possible errors by using **kwargs instead of just passing a dictionary through a function. So yeah, it's a slim use-case, but it's still a pretty useful functionality to have?
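A sketch of that pattern (the class name here is hypothetical):

```python
class Record:
    def __init__(self, **attrs):
        # keyword names are guaranteed to be strings,
        # so every key is a legal attribute name for __dict__
        self.__dict__.update(attrs)

r = Record(name="x", value=3)
print(r.name)   # x

try:
    Record(**{1: "oops"})   # passing a plain dict offers no such guarantee
except TypeError as e:
    print(e)   # keywords must be strings
```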

I feel like this is a perfect example of what Douglas Crockford likes to call "foot-guns", i.e. language features that are perfectly calibrated to ruin your day if you don't handle them extra carefully.

As for why -- **kwargs has some extra benefits in Python 3 (such as preserving keyword order, as of 3.6), and by using it, you avoid the risk of accidentally mutating a dictionary (or the pain of having to define a new dictionary every time you just want to call a function).

Last edited by The Great Hippo on Wed Mar 22, 2017 2:38 am UTC, edited 1 time in total.

Why does Python have this named-parameter thing in the first place? The language allows for pretty JSONic notation that only takes a few extra characters. They could've invested the time in pattern matching on dicts/named tuples/records rather than supporting this variadic-in-positional-and-named-arguments monstrosity. Even worse, TIL they removed pattern-matching power (tuple parameter unpacking, dropped in Python 3 by PEP 3113).

Being able to perform easy introspection on functions is super important, though? Especially when you're dealing with stuff like callbacks (where you can store a function with the wrong number of parameters and never find out until it fires much, much later). Pretty sure they got rid of it to expand on inspect.Signature, which -- though a bit bulky and sometimes awkward -- makes function introspection much easier and straightforward.
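For instance, a callback's arity can be checked up front at registration time (a sketch; the helper name is mine):

```python
import inspect

def check_arity(cb, n):
    # fail fast if a callback takes the wrong number of parameters,
    # instead of failing when it finally fires much later
    params = inspect.signature(cb).parameters
    if len(params) != n:
        raise TypeError(f"callback must take {n} parameters, got {len(params)}")

check_arity(lambda a, b: a + b, 2)   # fine

try:
    check_arity(lambda a: a, 2)
except TypeError as e:
    print(e)   # callback must take 2 parameters, got 1
```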

I don't have a lot of experience with function parameters outside of Python, but -- aside from what I'm talking about now -- I've never encountered any major problems. Python even supports function annotations (and through them, I'm pretty sure you can implement pattern checking -- or even type checking, if you'd really like).

I was about to say that's the workaround I typically use, but I don't like it because it doesn't raise errors when you fail to provide the appropriate number of positionals (fewer or more than two). But then I realized that since you're unpacking the tuples into two values, it will raise an error -- just not the type of error you'd expect. So, actually, yeah, that's a pretty great workaround, thanks! I'd only make one slight modification:
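(chridd's code and the modification aren't shown here, but the workaround presumably has roughly this shape -- note that a wrong arity raises ValueError rather than TypeError, matching the "not the type of error you'd expect" point:)

```python
def foo(*args, **kwargs):
    # unpacking enforces exactly two positionals, though a wrong
    # count raises ValueError instead of the usual TypeError
    a, b = args
    return a, b, kwargs

print(foo(1, 2, a=3))   # (1, 2, {'a': 3}) -- 'a' lands in kwargs

try:
    foo(1)
except ValueError as e:
    print(e)   # not enough values to unpack (expected 2, got 1)
```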

Xenomortis, can you define some kind of local function do_things :: BaseClass -> Object with overloads for the specific classes? (The program will still call the most specific overload, regardless of whether the object was bound to a variable of a superclass type in between, right?)

Flumble wrote:Xenomortis, can you define some kind of local function do_things :: BaseClass -> Object with overloads for the specific classes? (The program will still call the most specific overload, regardless of whether the object was bound to a variable of a superclass type in between, right?)

I don't know Java's type rules well enough to answer that. What I do know is that "Object" is passed around alarmingly liberally in this "greenfield" project.

Edit: I've gone for the "cool" thing, partly to test my suspicions that certain people aren't actually "reading" the diffs on Github pull requests. Sometimes, I'm not a nice person.

Flumble wrote:Xenomortis, can you define some kind of local function do_things :: BaseClass -> Object with overloads for the specific classes? (The program will still call the most specific overload, regardless of whether the object was bound to a variable of a superclass type in between, right?)

Pretty sure Java overloads are determined at compile time.

Of course they are. Otherwise we wouldn't need the confusingly named "visitor" pattern (for double-dispatch).

Yeah, as Xenomortis alludes, the magic phrase here is single dispatch... which method is actually called is based on the actual runtime type of the "this" parameter (i.e. calling overridden methods in subclasses), but all of the other parameters are bound statically based on their declared types.
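For contrast, Python offers opt-in runtime single dispatch on the first argument via functools.singledispatch (function names here are illustrative):

```python
from functools import singledispatch

@singledispatch
def do_things(obj):
    return "base handler"

@do_things.register
def _(obj: int):
    return "int handler"

x: object = 3            # the static annotation doesn't matter...
print(do_things(x))      # int handler -- dispatch uses the runtime type
```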

Hmm, I'm not sure why your intuition says that. Your code is clearly saying that the "a" parameter is set to 0. It's also clearly saying that the first parameter is set to 1. These conflict, so yeah, you're doing something wrong.

firechicago wrote:I feel like this is a perfect example of what Douglas Crockford likes to call "foot-guns", i.e. language features that are perfectly calibrated to ruin your day if you don't handle them extra carefully.

and then explicitly reach into the dict when you need to access those variables?

Nah, Python's named args are *great*. They give you compact function calls when it's clear, and robust self-documenting function calls otherwise. Using a dict (like you have to do in JS) draws a hard line between the two types of arguments - you *must* pass an argument positionally or in the dict, defined by the function, and can't, for example, pass the last argument by name because it always confuses you.

That said, dicts are usually the more appropriate way to handle things when you really do want to allow fully arbitrary name/value pairs to be passed in.

Xanthir wrote:Hmm, I'm not sure why your intuition says that. Your code is clearly saying that the "a" parameter is set to 0. It's also clearly saying that the first parameter is set to 1. These conflict, so yeah, you're doing something wrong.

It doesn't conflict in my mind, mostly because of the way I understand it (though the way I understand it might not be correct!). But let me take you through my thought process?

This will raise a TypeError, but let's pretend for a moment it doesn't. How many reasonable ways can we interpret this?

There are really only two possible solutions: either keyword arguments take precedence over positionals, or positionals take precedence over keyword arguments. If you decide keyword arguments are more important than positionals, you're left with a=1 overriding 0; that means 0 just... disappears. On top of that, it means our **kwargs is now just an empty dict!

Another way of saying this: If keyword parameters are more important than positionals, then foo(0, a=1) is equivalent to foo(a=1), which is also equivalent to foo(1). All of these will have the same output:

This seems way more interesting -- and more useful -- to me! Instead of having function inputs that do the exact same thing, I have a new particular behavior for a situation that would otherwise always raise an exception -- which means I'm increasing flexibility (at the cost of a little bit of extra protection).

I guess what I'm saying is -- it strikes me as intuitive because it's simultaneously both more permissive and more information rich? It makes a lot of sense to me to let positional assignments push keyword assignments into a **kwargs when **kwargs exists. I can see how it's also counter-intuitive, because of the assignment -- but so long as we're prohibiting stuff like foo(a=0, a=1), I think it expands what you can do with functions while also not adding any significant confusion.

I mean, what is this behavior meant to protect you against? If I'm calling foo with the right number of positionals and I'm assigning a keyword to one of those positionals, I'm either completely clueless about what foo does -- or I'm expecting the keyword assignment to do something other than just assign a value to the positional. And one of the things I like about Python is that it presumes you're not clueless; it just takes for granted that you kind of know what you're doing. This behavior seems to run counter to that.

EDIT: Sorry if you weren't looking for a conversation over this; I just thought I'd elaborate on why I still feel like it should work this way. That being said, I've already instituted chridd's solution for my particular case, and it works like a charm!

Now, what behavior do you expect out of "foo(0)", "foo(a=1)", and "foo(0, a=1)"?

The first should obviously print 0 - you supplied the single argument by position. The second should obviously print 1 - you supplied the single argument by name. For the third, tho: if you say 0, then passing a named "a" argument *changes behavior* based on the presence or absence of other arguments, which is confusing! (I'll assume that no one would reasonably expect it to print 1.)

This is why Python is strict in this regard - a name always means the same thing regardless of your calling signature, which makes the language more predictable.

(I'll grant that it might be useful to have some way to indicate that a given argument *can't* be passed by name, only by position, to avoid restricting the set of things that get put in kwargs. But YAGNI applies - this would be a very uncommon need, and there's already a way to do that with *args and tuple unpacking, so it's unclear that the cost/benefit of adding more complexity to argument declarations is worth it.)

Xanthir wrote:The first should obviously print 0 - you supplied the single argument by position. The second should obviously print 1 - you supplied the single argument by name. For the third, tho: if you say 0, then passing a named "a" argument *changes behavior* based on the presence or absence of other arguments, which is confusing! (I'll assume that no one would reasonably expect it to print 1.)

Is that a change in behavior, though? foo(0) and foo(0, a=1) are both exhibiting the same behavior; the behavior only changes when you remove or change the positional argument. The keyword argument isn't what's modifying behavior; the positional is. It's only when you take the positional out (foo(a=1)) that behavior suddenly changes.

This doesn't seem confusing to me at all, but it might have to do with a lack of experience outside of Python -- I also know some Python modules and extensions allow for positional-only arguments, and I'm not sure how that would conflict with this (my instincts say it would be fine, but I don't have enough experience to trust my instincts).

Grouping things that way, tho, implies that named args *never* assign to explicit arguments, only ever to a kwargs dict. Which is a fine model, nothing wrong with it, but it's definitely not the Python model, where every explicit argument has both a position and a name. I'm trying to convince you that, within the Python model, the current behavior is the most sensical, not that other models are necessarily bad. ^_^

(The model you describe is the way JS works, where to get "named arguments" you take an object as the last argument, and can use destructuring to pull out particular names if you want.)

Xanthir wrote:Grouping things that way, tho, implies that named args *never* assign to explicit arguments, only ever to a kwargs dict.

But in practice, it wouldn't -- right? foo(a=1) is a case where the function you provided isn't binding "a" to **kwargs, it's binding it to the named positional, "a". I'm effectively arguing for a minor exception to this: In cases where you've already defined "a" by its position (foo(0)), you can load a provided keyword assignment of "a" (foo(0, a=1)) into **kwargs if a **kwargs exists (if not, obviously, raise a TypeError, because you've got too many arguments).

I guess I'm struggling to understand how this is confusing, mostly because -- I see how every explicit argument has both a position and a name, but when your call supplies this argument by position, I feel as if that should free up the name (in the call, not the actual function) for other uses. If you've supplied a position, you don't need the name to refer to this parameter anymore -- so why not use it for something else?

I already explained why I find the "frees up the argument" thing confusing - it means that passing a named arg to a function can do totally different things depending on whether you provided a positional arg or not. In particular, this can be confusing if you begin by passing all the explicit args positionally (not caring what their name is) and passing some kwargs by name, and then later end up removing some of the (optional) positional args, and suddenly your function breaks because one of the named args you passed gets grabbed by an explicit argument.

Plus, by your argument, "foo(a=1, a=2)" should also work - the first "a" sets the explicit argument, the second is stored in kwargs, which is all kinds of confusing as well, at least to me.

Python's behavior of just saying "lol u fukked up" avoids all this trouble.

Xanthir wrote:I already explained why I find the "frees up the argument" thing confusing - it means that passing a named arg to a function can do totally different things depending on whether you provided a positional arg or not.

...but shouldn't it? If I'm calling a function with a positional in addition to a named arg, why shouldn't I expect fundamentally different behavior? If I supply a function with more arguments, shouldn't it behave differently?

Xanthir wrote:In particular, this can be confusing if you begin by passing all the explicit args positionally (not caring what their name is) and passing some kwargs by name, and then later end up removing some of the (optional) positional args, and suddenly your function breaks because one of the named args you passed gets grabbed by an explicit argument.

But how would that happen? Like, can you give me an example of a function that would break as a result of this?

The only way I could see this happening is if your function makes certain presumptions about what **kwargs contains -- and if you do that, you've written a bad function.

Xanthir wrote:Plus, by your argument, "foo(a=1, a=2)" should also work - the first "a" sets the explicit argument, the second is stored in kwargs, which is all kinds of confusing as well, at least to me.

That's a fair cop, and something I've been thinking about too -- on one hand, since Python 3.6 preserves the order of keywords, there really is no reason foo(a=1, a=2) shouldn't work in the framework I'm describing... on the other hand, that's pretty frigging ridiculous.

EDIT: I apologize if I'm being belligerent about this; if it helps, keep in mind I'm pretty certain I'm just not seeing something -- which is why I'm continuing to push the point. Because I want to figure out if I'm making some sort of gross oversight, here.

firechicago wrote:I feel like this is a perfect example of what Douglas Crockford likes to call "foot-guns", i.e. language features that are perfectly calibrated to ruin your day if you don't handle them extra carefully.

Nah, Python's named args are *great*. They give you compact function calls when it's clear, and robust self-documenting function calls otherwise. Using a dict (like you have to do in JS) draws a hard line between the two types of arguments - you *must* pass an argument positionally or in the dict, defined by the function, and can't, for example, pass the last argument by name because it always confuses you.

That said, dicts are usually the more appropriate way to handle things when you really do want to allow fully arbitrary name/value pairs to be passed in.

I didn't mean to call named arguments a foot-gun, just the specific behavior of allowing a named argument with the same name as a positional argument.

Xanthir wrote:I already explained why I find the "frees up the argument" thing confusing - it means that passing a named arg to a function can do totally different things depending on whether you provided a positional arg or not.

...but shouldn't it? If I'm calling a function with a positional in addition to a named arg, why shouldn't I expect fundamentally different behavior?

I mean, you do get different behavior in Python when you give an argument both ways - it throws an error. That behavior is fine with me, as it explicitly marks doing so as an error. Your "different behavior" is something people would explicitly use.

Xanthir wrote:In particular, this can be confusing if you begin by passing all the explicit args positionally (not caring what their name is) and passing some kwargs by name, and then later end up removing some of the (optional) positional args, and suddenly your function breaks because one of the named args you passed gets grabbed by an explicit argument.

But how would that happen? Like, can you give me an example of a function that would break as a result of this?

The only way I could see this happening is if your function makes certain presumptions about what **kwargs contains -- and if you do that, you've written a bad function.

Now, if you started out by calling "build_dict(0, a=1, b=2, default=3)", you'd get a defaultdict with 3 keys, equivalent to {"a": 1, "b": 2, "default": 3}, and non-existent keys get set to 0 when you query for them.

Now, imagine you decided you didn't want the defaulting behavior, so you remove the 0 arg: "build_dict(a=1, b=2, default=3)". Now your code is broken - you get a dict with *two* keys, {"a": 1, "b": 2}, and non-existent keys get set to 3 when you query for them. Whoops!
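The definition behind that example isn't shown in the thread; a guess at its shape, in current Python (where the mixed call is simply an error):

```python
from collections import defaultdict

def build_dict(default=None, **kwargs):
    # hypothetical reconstruction of the build_dict being discussed
    return defaultdict(lambda: default, kwargs)

d = build_dict(a=1, b=2, default=3)
print(dict(d))       # {'a': 1, 'b': 2} -- 'default' bound the named parameter
print(d["missing"])  # 3

# build_dict(0, a=1, b=2, default=3) raises TypeError in current Python;
# under the proposed semantics, default=3 would land in **kwargs instead
```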

Oh -- I see what you mean, now. Yeah, if we have a positional that, when not supplied, gets a default assignment -- and we remove the positional from the call to rely on that default assignment -- we're no longer under the protection of the behavior I'm describing. And if we're still expecting that behavior, it can screw everything up.

I was struggling to think of a case where this could allow a keyword assignment you expect to be bound to **kwargs to be bound to a named argument, instead -- I thought about default assignments, but for some reason, I didn't think of them in relation to positionals. You're absolutely right; that could be confusing as hell.

(And now that you've put it that way, I can see how Python's current behavior -- which would throw an exception in both the call examples you used -- protects against this confusion by refusing to let you use "default" as an assignment; you end up having to figure out another solution, like just passing a dict to the function)