Tag Info

If you keep recursing into your function indefinitely, sooner or later you will get the following error -
RuntimeError: maximum recursion depth exceeded
A simple example to show this -
>>> def a():
...     global i
...     i += 1
...     a()
Then I ran the function like so -
>>> i = 0
>>> a()
This gave me the above ...
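A minimal sketch of how this surfaces (in Python 3 the error is RecursionError, which is a subclass of RuntimeError, so the same except clause catches both; the limit is visible via sys.getrecursionlimit()):

```python
import sys

def a():
    global i
    i += 1
    a()

i = 0
hit_limit = False
try:
    a()
except RuntimeError:  # RecursionError in Python 3 is a subclass of RuntimeError
    hit_limit = True

# i ends up near the recursion limit (the exact value depends on the
# stack depth already in use when a() was first called)
print(hit_limit, i, sys.getrecursionlimit())
```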

In your code, myIter(n) actually does work -- it loops 100 times.
myGen(n), on the other hand, simply builds the generator -- and that's it. It doesn't count to 100. All you're doing is timing how long it takes to build the object, and you're timing it in an unreliable way. If we use timeit (here using IPython to make things simpler):
>>> ...
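The laziness is easy to demonstrate with a side effect (my_gen here is a stand-in for the myGen in the question): none of the generator's body runs until you actually iterate it.

```python
events = []

def my_gen(n):
    events.append('started')
    for i in range(n):
        yield i

g = my_gen(100)            # builds the generator object only
ran_before = list(events)  # still empty: no body code has run yet
total = sum(g)             # now the loop actually executes
print(ran_before, events, total)
```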

First of all: the abs() calls are entirely redundant if you are squaring the result anyway.
Next, you may be reading the profile output wrong; don't mistake the cumulative times for the time spent on the function call itself. You are calling abs() many, many times, so the accumulated time rises rapidly.
Moreover, profiling adds a lot of overhead ...
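The redundancy is easy to check: squaring already discards the sign, so for real numbers abs(x)**2 equals x**2.

```python
# abs() adds one extra function call per element but changes nothing
ok = all(abs(x) ** 2 == x ** 2 for x in (-3.5, 0, 4))
print(ok)
```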

You can use a generator expression and next:
return next((foo for foo in foos if foo.text == expected_text), None)
next will return the first yielded item that meets the condition.
If no item matches, next will return the default value None.
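A quick usage sketch (Foo here is a hypothetical stand-in for whatever objects foos holds):

```python
class Foo:
    def __init__(self, text):
        self.text = text

foos = [Foo('a'), Foo('b'), Foo('b')]

def find_foo(foos, expected_text):
    # returns the first match, or None if nothing matches
    return next((foo for foo in foos if foo.text == expected_text), None)

match = find_foo(foos, 'b')    # first matching object only
missing = find_foo(foos, 'z')  # falls back to the default
```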

You could create a new Counter with the same keys and only the increment then sum that with the original:
increment = Counter(dict.fromkeys(x, n))
y = x + increment
Not that Counter objects are suited to trillions of keys; if your datasets are that large, consider different tools, such as a database.
Demo:
>>> from collections ...
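A hedged version of the demo with small concrete numbers (x and n are hypothetical):

```python
from collections import Counter

x = Counter({'a': 1, 'b': 2})
n = 10

# dict.fromkeys(x, n) maps every existing key to n;
# Counter addition then adds n to each original count
increment = Counter(dict.fromkeys(x, n))
y = x + increment
print(y)
```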

You are mixing object types.
'£' is a bytestring, containing encoded data. That those bytes happen to represent a pound sign in your terminal or console is neither here nor there; it could just as well have been a pixel in an image. Your terminal or console is configured to produce and accept UTF-8 data instead, so the actual content of that bytestring is ...
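In Python 3 terms the distinction looks like this (b'\xc2\xa3' is the UTF-8 encoding of the pound sign):

```python
raw = b'\xc2\xa3'           # bytestring: just two bytes of encoded data
text = raw.decode('utf-8')  # text string: one character, the pound sign

assert text == '\u00a3'
assert text.encode('utf-8') == raw
assert len(raw) == 2 and len(text) == 1
```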

To use __new__ in Python 2.x, the class should be a new-style class (a class derived from object).
The call to super() is also different from that in Python 3.x.
class ExampleClass(object): # <---
    def __new__(cls):
        print("new")
        return super(ExampleClass, cls).__new__(cls) # <---
    def __init__(self):
        print("init")
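A quick sketch of the call order on instantiation (the same class body also runs under Python 3, since the explicit two-argument super() form is still valid there):

```python
events = []

class ExampleClass(object):
    def __new__(cls):
        events.append('new')
        return super(ExampleClass, cls).__new__(cls)

    def __init__(self):
        events.append('init')

# __new__ creates the instance first, then __init__ initializes it
obj = ExampleClass()
print(events)
```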

Two easy changes to start with. First, don't incrementally write the output file; it adds a lot of unnecessary overhead and is your biggest problem by far.
Second, you seem to be going through a lot of steps to pull out the triplet. Something like this would be more efficient, and .apply removes some of the looping overhead.
def triplet(row):
    loc ...

Yes, file.close() can throw an IOError exception. This can happen when the file system uses quotas, for example. See the C close() function man page:
Not checking the return value of close() is a common but nevertheless serious programming error. It is quite possible that errors on a previous write(2) operation are first reported at the final close(). ...
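A sketch of where such an exception would surface (the path is a hypothetical scratch file; close() failures are rare on local disks, but deferred write errors can appear exactly here):

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), 'close_demo.txt')

f = open(path, 'w')
f.write('data')
try:
    f.close()
except IOError as e:  # errors from earlier writes may only be reported now
    print('close failed:', e)

print(f.closed)
os.remove(path)
```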

len(input_list[i:])
This makes a copy of the list from position i to the end, an O(n) operation (where n is the number of elements in the slice). Then it asks it what its length is: O(1). So O(n) overall.
len(input_list) - i
This is just asking for length (O(1)) and then subtracting (also O(1)). So O(1) overall.
See: ...
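A small check that both expressions agree on the result; the slice just pays an extra O(n) copy along the way:

```python
input_list = list(range(1000))
i = 300

a = len(input_list[i:])   # copies 700 elements, then takes the length
b = len(input_list) - i   # pure arithmetic, no copy
print(a, b)
```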

If you want to create a dictionary object from that string, you can use the dict function and a generator expression which splits the string based on whitespaces and then by =, like this
>>> data = 'class_="template_title" height="50" valign="bottom" width="535"'
>>> dict(item.split('=') for item in data.split())
{'width': '"535"', ...
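Running the expression on the string from the question: note the values keep their surrounding quotes; a dict comprehension can strip them if you want bare values.

```python
data = 'class_="template_title" height="50" valign="bottom" width="535"'

# split on whitespace, then split each item on '=' into a key/value pair
d = dict(item.split('=') for item in data.split())
print(d['height'])  # the quotes are part of the value

# strip the surrounding quotes if you want bare values
clean = {k: v.strip('"') for k, v in d.items()}
```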

The condition causing the issue is -
if p**2 > n: break
Let's take an example: 7. When we check whether 7 is prime, we have already found that [2, 3, 5] are primes. The condition above breaks the for loop as soon as we check 3, since 3**2 = 9, which is greater than 7.
Remove that condition and it works fine (though it's very slow).
In the ...
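For reference, the p**2 > n shortcut is sound when the primes are tried in increasing order and the break happens before n is declared composite; a minimal trial-division sketch under that assumption (primes_up_to is a hypothetical name, not from the question):

```python
def primes_up_to(limit):
    primes = []
    for n in range(2, limit + 1):
        is_prime = True
        for p in primes:
            if p * p > n:
                break  # no prime factor of n can exceed sqrt(n)
            if n % p == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(n)
    return primes

result = primes_up_to(30)
print(result)
```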

plt.subplots() returns a tuple, (fig, axarr), where axarr is an array of axes objects. So if you have two rows and two columns, axarr is a 2x2 array of axes, but plt.subplots still returns only two objects, not five. This is true when unpacking any tuple, and not limited to plt.subplots().
So the following will run:
a = 'banana'
b = np.array([1,2,3,4])
tup = ...
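The same unpacking rule can be shown without matplotlib: the number of names on the left must match the tuple's length.

```python
tup = ('banana', [1, 2, 3, 4])

a, b = tup  # two names, two items: fine

try:
    a, b, c, d, e = tup  # five names, two items: ValueError
except ValueError as exc:
    error = str(exc)
print(error)
```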

If c is your string column: map is used to apply a function elementwise (and of course you wouldn't have to chain it all together like this)
df[c] = (df[c].str.lower()
              .str.decode('utf-8')
              .map(lambda x: unicodedata.normalize('NFKD', x))
              .str.encode('ascii', 'ignore'))

It's printing 0 because the product of the "last" items in the dictionary is 0. If you want to know the products of each item in turn then you need to print inside the loop. If you want a total then you should either add to the existing value or use sum() with a generator expression (genex).
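A sketch with hypothetical data showing both options: printing each product inside the loop, and totalling with sum() and a genexpr.

```python
# hypothetical {name: (quantity, price)} data
prices = {'apple': (2, 3), 'pear': (4, 5)}

for name, (qty, price) in prices.items():
    print(name, qty * price)  # per-item product, printed inside the loop

# total across all items via sum() and a generator expression
total = sum(qty * price for qty, price in prices.values())
print(total)
```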

For your special use case of argmax, you may use np.where and set the masked values to negative infinity:
>>> inf = np.iinfo('i8').max
>>> np.where(cond, arr, -inf).argmax(axis=1)
array([1, 2])
Alternatively, you can manually broadcast using np.tile:
>>> np.ma.array(np.tile(arr, 2).reshape(2, 3), mask=~cond).argmax(axis=1)
...
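Filled in with concrete values (arr and cond here are hypothetical stand-ins; using the dtype's minimum as the sentinel, in the spirit of the np.iinfo trick above):

```python
import numpy as np

arr = np.array([[1, 5, 3], [4, 2, 6]])
cond = np.array([[True, True, False], [True, False, True]])

# replace masked-out entries with the smallest representable value,
# so argmax can never pick them
neg = np.iinfo(arr.dtype).min
result = np.where(cond, arr, neg).argmax(axis=1)
print(result)
```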

NightShadeQueen's answer is correct.
There is one additional detail: In the situation when i is greater than the length of the list, then len(input_list[i:]) will be zero, but len(input_list) - i will be a negative number.
This is because the slice operator will limit the indexes to the endpoints of the array.
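A tiny demonstration of that edge case:

```python
input_list = [1, 2, 3]
i = 10

print(len(input_list[i:]))   # slicing clamps i to the list's end -> 0
print(len(input_list) - i)   # plain arithmetic goes negative -> -7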

EDIT 1 adds more information on file pointer operations.
The 'w' mode you used is for write-only.
To open a file for read/write access, you can use a file mode of 'r+' or 'rb+', but even if you do this, you will still have to rewind the file pointer to the start of the file using seek to read it after you write it.
f = open('filename', 'rb+')
The file ...
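A sketch of the write-then-rewind-then-read pattern ('r+' requires an existing file, so a hypothetical scratch file is created first):

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), 'rw_demo.txt')
with open(path, 'w') as f:
    f.write('hello')

with open(path, 'r+') as f:
    f.write('HELLO')       # writes from position 0, overwriting in place
    f.seek(0)              # rewind, or read() would start at EOF
    contents = f.read()

print(contents)
os.remove(path)
```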

This is terrible code to begin with, because it teaches you all sorts of bad habits, and is outdated to boot. When dealing with file objects, use the with statement and a context manager:
with open(filename, "w") as target:
    print "Truncating the file. Goodbye!"
    target.truncate()
    print "Now I'm going to ask you for three lines."
    line1 = ...
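A Python 3 rendering of the same idea (filename and the lines written are hypothetical):

```python
import os
import tempfile

filename = os.path.join(tempfile.gettempdir(), 'target_demo.txt')

with open(filename, 'w') as target:
    print("Truncating the file. Goodbye!")
    target.truncate()  # redundant after 'w', kept to mirror the original
    print("Now I'm going to ask you for three lines.")
    for line in ('one', 'two', 'three'):
        target.write(line + '\n')

with open(filename) as target:
    written = target.read()

os.remove(filename)
```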

The import name is used to resolve the directory where the Flask application is installed (see the get_root_path function in flask/helpers.py). This makes it possible for things like render_template, send_static_file, and relative file paths in config to resolve to files in the application's folder without needing a file path to be provided.
Consider an ...

append() returns None, so it doesn't make sense to use it in conjunction with the map function. A simple for loop would suffice:
a = []
for i in range(5):
    a.append(i)
print a
Alternatively, if you want to use list comprehensions / the map function:
a = range(5) # Python 2.x
a = list(range(5)) # Python 3.x
a = [i for i in range(5)]
a = map(lambda i: i, ...
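All of these spellings build the same list:

```python
a1 = []
for i in range(5):
    a1.append(i)

a2 = list(range(5))
a3 = [i for i in range(5)]
a4 = list(map(lambda i: i, range(5)))  # list() needed in Python 3, where map is lazy

print(a1, a2, a3, a4)
```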

You are indeed overthinking. You can filter by model objects, and since you want an OR operation, you can use Q objects like so:
from django.db.models import Q
user = request.user
conditions = Q(creator=user) | Q(friend=user)
relevant_friends_list = FriendShip.objects.filter(conditions)
Note that request.user is only available inside a view where the ...

I am assuming you are using Python 3.x. The actual reason your code with map() does not work is that in Python 3.x, map() returns a lazy iterator; unless you iterate over the object returned by map(), the lambda function is never called. Try doing list(map(...)), and you should see a getting filled.
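The laziness is observable with a side effect (double is a hypothetical stand-in for the lambda):

```python
called = []

def double(x):
    called.append(x)
    return x * 2

m = map(double, [1, 2, 3])
before = list(called)  # empty: map() hasn't called double yet

result = list(m)       # iterating the map object triggers the calls
print(before, called, result)
```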
That being said, what you are ...