
Columns: question_id (int64), score (int64), creation_date (int64), tags (string), instruction (string), output (string)
231,767
13,124
1,224,800,471
["python","iterator","generator","yield"]
What does the "yield" keyword do in Python? What functionality does the yield keyword in Python provide? For example, I'm trying to understand this code [1]: def _get_child_candidates(self, distance, min_dist, max_dist): if self._leftchild and distance - max_dist < self._median: yield self._leftchild if self._rightchild and distance + max_dist >= self._median: yield self._rightchild And this is the caller: result, candidates = [], [self] while candidates: node = candidates.pop() distance = node._get_dist(obj) if distance <= max_dist and distance >= min_dist: result.extend(node._values) candidates.extend(node._get_child_candidates(distance, min_dist, max_dist)) return result What happens when the method _get_child_candidates is called? Is a list returned? A single element? Is it called again? When will subsequent calls stop? [1] This piece of code was written by Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: Module mspace.
To understand what yield does, you must understand what generators are. And before you can understand generators, you must understand iterables. Iterables When you create a list, you can read its items one by one. Reading its items one by one is called iteration: >>> mylist = [1, 2, 3] >>> for i in mylist: ... print(i) 1 2 3 mylist is an iterable. When you use a list comprehension, you create a list, and so an iterable: >>> mylist = [x*x for x in range(3)] >>> for i in mylist: ... print(i) 0 1 4 Everything you can use "for... in..." on is an iterable; lists, strings, files... These iterables are handy because you can read them as much as you wish, but you store all the values in memory and this is not always what you want when you have a lot of values. Generators Generators are iterators, a kind of iterable you can only iterate over once. Generators do not store all the values in memory, they generate the values on the fly: >>> mygenerator = (x*x for x in range(3)) >>> for i in mygenerator: ... print(i) 0 1 4 It is just the same except you used () instead of []. BUT, you cannot perform for i in mygenerator a second time since generators can only be used once: they calculate 0, then forget about it and calculate 1, and end after calculating 4, one by one. Yield yield is a keyword that is used like return, except the function will return a generator. >>> def create_generator(): ... mylist = range(3) ... for i in mylist: ... yield i*i ... >>> mygenerator = create_generator() # create a generator >>> print(mygenerator) # mygenerator is an object! <generator object create_generator at 0xb7555c34> >>> for i in mygenerator: ... print(i) 0 1 4 Here it's a useless example, but it's handy when you know your function will return a huge set of values that you will only need to read once. To master yield, you must understand that when you call the function, the code you have written in the function body does not run. The function only returns the generator object, this is a bit tricky. Then, your code will continue from where it left off each time for uses the generator. Now the hard part: The first time the for calls the generator object created from your function, it will run the code in your function from the beginning until it hits yield, then it'll return the first value of the loop. Then, each subsequent call will run another iteration of the loop you have written in the function and return the next value. This will continue until the generator is considered empty, which happens when the function runs without hitting yield. That can be because the loop has come to an end, or because you no longer satisfy an "if/else". 
Your code explained Generator: # Here you create the method of the node object that will return the generator def _get_child_candidates(self, distance, min_dist, max_dist): # Here is the code that will be called each time you use the generator object: # If there is still a child of the node object on its left # AND if the distance is ok, return the next child if self._leftchild and distance - max_dist < self._median: yield self._leftchild # If there is still a child of the node object on its right # AND if the distance is ok, return the next child if self._rightchild and distance + max_dist >= self._median: yield self._rightchild # If the function arrives here, the generator will be considered empty # There are no more than two values: the left and the right children Caller: # Create an empty list and a list with the current object reference result, candidates = list(), [self] # Loop on candidates (they contain only one element at the beginning) while candidates: # Get the last candidate and remove it from the list node = candidates.pop() # Get the distance between obj and the candidate distance = node._get_dist(obj) # If the distance is ok, then you can fill in the result if distance <= max_dist and distance >= min_dist: result.extend(node._values) # Add the children of the candidate to the candidate's list # so the loop will keep running until it has looked # at all the children of the children of the children, etc. of the candidate candidates.extend(node._get_child_candidates(distance, min_dist, max_dist)) return result This code contains several smart parts: The loop iterates on a list, but the list expands while the loop is being iterated. It's a concise way to go through all these nested data even if it's a bit dangerous since you can end up with an infinite loop. In this case, candidates.extend(node._get_child_candidates(distance, min_dist, max_dist)) exhausts all the values of the generator, but while keeps creating new generator objects which will produce different values from the previous ones since it's not applied on the same node. The extend() method is a list object method that expects an iterable and adds its values to the list. Usually, we pass a list to it: >>> a = [1, 2] >>> b = [3, 4] >>> a.extend(b) >>> print(a) [1, 2, 3, 4] But in your code, it gets a generator, which is good because: You don't need to read the values twice. You may have a lot of children and you don't want them all stored in memory. And it works because Python does not care if the argument of a method is a list or not. Python expects iterables so it will work with strings, lists, tuples, and generators! This is called duck typing and is one of the reasons why Python is so cool. But this is another story, for another question... You can stop here, or read a little bit to see an advanced use of a generator: Controlling a generator exhaustion >>> class Bank(): # Let's create a bank, building ATMs ... crisis = False ... def create_atm(self): ... while not self.crisis: ... yield "$100" >>> hsbc = Bank() # When everything's ok the ATM gives you as much as you want >>> corner_street_atm = hsbc.create_atm() >>> print(corner_street_atm.next()) $100 >>> print(corner_street_atm.next()) $100 >>> print([corner_street_atm.next() for cash in range(5)]) ['$100', '$100', '$100', '$100', '$100'] >>> hsbc.crisis = True # Crisis is coming, no more money! 
>>> print(corner_street_atm.next()) <type 'exceptions.StopIteration'> >>> wall_street_atm = hsbc.create_atm() # It's even true for new ATMs >>> print(wall_street_atm.next()) <type 'exceptions.StopIteration'> >>> hsbc.crisis = False # The trouble is, even post-crisis the ATM remains empty >>> print(corner_street_atm.next()) <type 'exceptions.StopIteration'> >>> brand_new_atm = hsbc.create_atm() # Build a new one to get back in business >>> for cash in brand_new_atm: ... print cash $100 $100 $100 $100 $100 $100 $100 $100 $100 ... Note: For Python 3, use print(corner_street_atm.__next__()) or print(next(corner_street_atm)) It can be useful for various things like controlling access to a resource. Itertools, your best friend The itertools module contains special functions to manipulate iterables. Ever wish to duplicate a generator? Chain two generators? Group values in a nested list with a one-liner? Map / Zip without creating another list? Then just import itertools. An example? Let's see the possible orders of arrival for a four-horse race: >>> horses = [1, 2, 3, 4] >>> races = itertools.permutations(horses) >>> print(races) <itertools.permutations object at 0xb754f1dc> >>> print(list(itertools.permutations(horses))) [(1, 2, 3, 4), (1, 2, 4, 3), (1, 3, 2, 4), (1, 3, 4, 2), (1, 4, 2, 3), (1, 4, 3, 2), (2, 1, 3, 4), (2, 1, 4, 3), (2, 3, 1, 4), (2, 3, 4, 1), (2, 4, 1, 3), (2, 4, 3, 1), (3, 1, 2, 4), (3, 1, 4, 2), (3, 2, 1, 4), (3, 2, 4, 1), (3, 4, 1, 2), (3, 4, 2, 1), (4, 1, 2, 3), (4, 1, 3, 2), (4, 2, 1, 3), (4, 2, 3, 1), (4, 3, 1, 2), (4, 3, 2, 1)] Understanding the inner mechanisms of iteration Iteration is a process involving iterables (implementing the __iter__() method) and iterators (implementing the __next__() method). Iterables are any objects you can get an iterator from. Iterators are objects that let you iterate on iterables. There is more about it in this article about how for loops work.
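To make the iterable/iterator distinction at the end of that answer concrete, here is a minimal sketch (the Countdown class and its names are illustrative, not from the answer) of the two methods in action:

class Countdown:
    """An iterable: each call to __iter__ hands out a fresh iterator."""
    def __init__(self, start):
        self.start = start
    def __iter__(self):
        return CountdownIterator(self.start)

class CountdownIterator:
    """An iterator: __next__ produces values until it raises StopIteration."""
    def __init__(self, current):
        self.current = current
    def __iter__(self):
        return self  # iterators are conventionally their own iterable
    def __next__(self):
        if self.current <= 0:
            raise StopIteration  # exhausted, just like a spent generator
        value = self.current
        self.current -= 1
        return value

for n in Countdown(3):
    print(n)  # 3, 2, 1 -- and Countdown(3) can be iterated again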
419,163
8,430
1,231,301,460
["python","namespaces","program-entry-point","python-module","idioms"]
What does if __name__ == "__main__": do? What does this do, and why should one include the if statement? if __name__ == "__main__": print("Hello, World!") If you are trying to close a question where someone should be using this idiom and isn't, consider closing as a duplicate of Why is Python running my module when I import it, and how do I stop it? instead. For questions where someone simply hasn't called any functions, or incorrectly expects a function named main to be used as an entry point automatically, use Why doesn't the main() function run when I start a Python script? Where does the script start running?.
Short Answer It's boilerplate code that protects users from accidentally invoking the script when they didn't intend to. Here are some common problems when the guard is omitted from a script: If you import the guardless script in another script (e.g. import my_script_without_a_name_eq_main_guard), then the latter script will trigger the former to run at import time, using the second script's command line arguments. This is almost always a mistake. If you have a custom class in the guardless script and save it to a pickle file, then unpickling it in another script will trigger an import of the guardless script, with the same problems outlined in the previous bullet. Long Answer To better understand why and how this matters, we need to take a step back to understand how Python initializes scripts and how this interacts with its module import mechanism. Whenever the Python interpreter reads a source file, it does two things: it sets a few special variables like __name__, and then it executes all of the code found in the file. Let's see how this works and how it relates to your question about the __name__ checks we always see in Python scripts. Code Sample Let's use a slightly different code sample to explore how imports and scripts work. Suppose the following is in a file called foo.py. # Suppose this is foo.py. print("before import") import math print("before function_a") def function_a(): print("Function A") print("before function_b") def function_b(): print("Function B {}".format(math.sqrt(100))) print("before __name__ guard") if __name__ == '__main__': function_a() function_b() print("after __name__ guard") Special Variables When the Python interpreter reads a source file, it first defines a few special variables. In this case, we care about the __name__ variable. When Your Module Is the Main Program If you are running your module (the source file) as the main program, e.g. python foo.py the interpreter will assign the hard-coded string "__main__" to the __name__ variable, i.e. # It's as if the interpreter inserts this at the top # of your module when run as the main program. __name__ = "__main__" When Your Module Is Imported By Another On the other hand, suppose some other module is the main program and it imports your module. This means there's a statement like this in the main program, or in some other module the main program imports: # Suppose this is in some other main program. import foo The interpreter will search for your foo.py file (along with searching for a few other variants), and prior to executing that module, it will assign the name "foo" from the import statement to the __name__ variable, i.e. # It's as if the interpreter inserts this at the top # of your module when it's imported from another module. __name__ = "foo" Executing the Module's Code After the special variables are set up, the interpreter executes all the code in the module, one statement at a time. You may want to open another window on the side with the code sample so you can follow along with this explanation. Always It prints the string "before import" (without quotes). It loads the math module and assigns it to a variable called math. This is equivalent to replacing import math with the following (note that __import__ is a low-level function in Python that takes a string and triggers the actual import): # Find and load a module given its string name, "math", # then assign it to a local variable called math. math = __import__("math") It prints the string "before function_a".
It executes the def block, creating a function object, then assigning that function object to a variable called function_a. It prints the string "before function_b". It executes the second def block, creating another function object, then assigning it to a variable called function_b. It prints the string "before __name__ guard". Only When Your Module Is the Main Program If your module is the main program, then it will see that __name__ was indeed set to "__main__" and it calls the two functions, printing the strings "Function A" and "Function B 10.0". Only When Your Module Is Imported by Another (instead) If your module is not the main program but was imported by another one, then __name__ will be "foo", not "__main__", and it'll skip the body of the if statement. Always It will print the string "after __name__ guard" in both situations. Summary In summary, here's what'd be printed in the two cases: # What gets printed if foo is the main program before import before function_a before function_b before __name__ guard Function A Function B 10.0 after __name__ guard # What gets printed if foo is imported as a regular module before import before function_a before function_b before __name__ guard after __name__ guard Why Does It Work This Way? You might naturally wonder why anybody would want this. Well, sometimes you want to write a .py file that can be both used by other programs and/or modules as a module, and can also be run as the main program itself. Examples: Your module is a library, but you want to have a script mode where it runs some unit tests or a demo. Your module is only used as a main program, but it has some unit tests, and the testing framework works by importing .py files like your script and running special test functions. You don't want it to try running the script just because it's importing the module. Your module is mostly used as a main program, but it also provides a programmer-friendly API for advanced users. Beyond those examples, it's elegant that running a script in Python is just setting up a few magic variables and importing the script. "Running" the script is a side effect of importing the script's module. Food for Thought Question: Can I have multiple __name__ checking blocks? Answer: it's strange to do so, but the language won't stop you. Suppose the following is in foo2.py. What happens if you say python foo2.py on the command-line? Why? # Suppose this is foo2.py. import os, sys; sys.path.insert(0, os.path.dirname(__file__)) # needed for some interpreters def function_a(): print("a1") from foo2 import function_b print("a2") function_b() print("a3") def function_b(): print("b") print("t1") if __name__ == "__main__": print("m1") function_a() print("m2") print("t2") Now, figure out what will happen in foo3.py (having removed the __name__ check): # Suppose this is foo3.py. import os, sys; sys.path.insert(0, os.path.dirname(__file__)) # needed for some interpreters def function_a(): print("a1") from foo3 import function_b print("a2") function_b() print("a3") def function_b(): print("b") print("t1") print("m1") function_a() print("m2") print("t2") What will this do when used as a script? When imported as a module? # Suppose this is in foo4.py __name__ = "__main__" def bar(): print("bar") print("before __name__ guard") if __name__ == "__main__": bar() print("after __name__ guard")
89,228
6,286
1,221,701,730
["python","shell","terminal","subprocess","command"]
How do I execute a program or call a system command? How do I call an external command within Python as if I had typed it in a shell or command prompt?
Use subprocess.run: import subprocess subprocess.run(["ls", "-l"]) Another common way is os.system, but you shouldn't use it: it is unsafe if any part of the command comes from outside your program or can contain spaces or other special characters. Also, subprocess.run is generally more flexible (you can get the stdout, stderr, the "real" status code, better error handling, etc.). Even the documentation for os.system recommends using subprocess instead. On Python 3.4 and earlier, use subprocess.call instead of .run: subprocess.call(["ls", "-l"])
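If you also need the command's output, subprocess.run can capture it; a small sketch (capture_output requires Python 3.7+):

import subprocess

# capture_output=True collects stdout/stderr; text=True decodes bytes to str
result = subprocess.run(["ls", "-l"], capture_output=True, text=True)
print(result.returncode)  # the command's exit status
print(result.stdout)      # its standard output, as a string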
522,563
5,669
1,233,960,474
["python","loops","list"]
How can I access the index value in a 'for' loop? How do I access the index while iterating over a sequence with a for loop? xs = [8, 23, 45] for x in xs: print("item #{} = {}".format(index, x)) Desired output: item #1 = 8 item #2 = 23 item #3 = 45
Use the built-in function enumerate(): for idx, x in enumerate(xs): print(idx, x) It is non-Pythonic to manually index via for i in range(len(xs)): x = xs[i] or manually manage an additional state variable. Check out PEP 279 for more.
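Since enumerate() counts from 0 by default, reproducing the 1-based output shown in the question takes a start argument; for example:

xs = [8, 23, 45]
for index, x in enumerate(xs, start=1):  # start=1 shifts the counter
    print("item #{} = {}".format(index, x))
# item #1 = 8
# item #2 = 23
# item #3 = 45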
952,914
5,505
1,244,147,405
["python","list","multidimensional-array","flatten"]
How do I make a flat list out of a list of lists? I have a list of lists like [ [1, 2, 3], [4, 5, 6], [7], [8, 9] ] How can I flatten it to get [1, 2, 3, 4, 5, 6, 7, 8, 9]? If your list of lists comes from a nested list comprehension, the problem can be solved more simply/directly by fixing the comprehension; please see How can I get a flat result from a list comprehension instead of a nested list?. The most popular solutions here generally only flatten one "level" of the nested list. See Flatten an irregular (arbitrarily nested) list of lists for solutions that completely flatten a deeply nested structure (recursively, in general).
A list of lists named xss can be flattened using a nested list comprehension: flat_list = [ x for xs in xss for x in xs ] The above is equivalent to: flat_list = [] for xs in xss: for x in xs: flat_list.append(x) Here is the corresponding function: def flatten(xss): return [x for xs in xss for x in xs] This is the fastest method. As evidence, using the timeit module in the standard library, we see: $ python -mtimeit -s'xss=[[1,2,3],[4,5,6],[7],[8,9]]*99' '[x for xs in xss for x in xs]' 10000 loops, best of 3: 143 usec per loop $ python -mtimeit -s'xss=[[1,2,3],[4,5,6],[7],[8,9]]*99' 'sum(xss, [])' 1000 loops, best of 3: 969 usec per loop $ python -mtimeit -s'xss=[[1,2,3],[4,5,6],[7],[8,9]]*99' 'reduce(lambda xs, ys: xs + ys, xss)' 1000 loops, best of 3: 1.1 msec per loop Explanation: the methods based on + (including the implied use in sum) are, of necessity, O(L**2) when there are L sublists -- as the intermediate result list keeps getting longer, at each step a new intermediate result list object gets allocated, and all the items in the previous intermediate result must be copied over (as well as a few new ones added at the end). So, for simplicity and without actual loss of generality, say you have L sublists of M items each: the first M items are copied back and forth L-1 times, the second M items L-2 times, and so on; total number of copies is M times the sum of x for x from 1 to L excluded, i.e., M * (L**2)/2. The list comprehension just generates one list, once, and copies each item over (from its original place of residence to the result list) also exactly once.
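Another standard-library option that likewise avoids the quadratic copying described above is itertools.chain.from_iterable; a quick sketch:

import itertools

xss = [[1, 2, 3], [4, 5, 6], [7], [8, 9]]
# chain.from_iterable lazily yields the items of each sublist in order
flat_list = list(itertools.chain.from_iterable(xss))
print(flat_list)  # [1, 2, 3, 4, 5, 6, 7, 8, 9]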
509,211
4,704
1,233,700,262
["python","slice","sequence"]
How slicing in Python works How does Python's slice notation work? That is: when I write code like a[x:y:z], a[:], a[::2] etc., how can I understand which elements end up in the slice? See Why are slice and range upper-bound exclusive? to learn why xs[0:2] == [xs[0], xs[1]], not [..., xs[2]]. See Make a new list containing every Nth item in the original list for xs[::N]. See How does assignment work with list slices? to learn what xs[0:2] = ["a", "b"] does.
The syntax is: a[start:stop] # items start through stop-1 a[start:] # items start through the rest of the array a[:stop] # items from the beginning through stop-1 a[:] # a copy of the whole array There is also the step value, which can be used with any of the above: a[start:stop:step] # start through not past stop, by step The key point to remember is that the :stop value represents the first value that is not in the selected slice. So, the difference between stop and start is the number of elements selected (if step is 1, the default). The other feature is that start or stop may be a negative number, which means it counts from the end of the array instead of the beginning. So: a[-1] # last item in the array a[-2:] # last two items in the array a[:-2] # everything except the last two items Similarly, step may be a negative number: a[::-1] # all items in the array, reversed a[1::-1] # the first two items, reversed a[:-3:-1] # the last two items, reversed a[-3::-1] # everything except the last two items, reversed Python is kind to the programmer if there are fewer items than you ask for. For example, if you ask for a[:-2] and a only contains one element, you get an empty list instead of an error. Sometimes you would prefer the error, so you have to be aware that this may happen. Relationship with the slice object A slice object can represent a slicing operation, i.e.: a[start:stop:step] is equivalent to: a[slice(start, stop, step)] Slice objects also behave slightly differently depending on the number of arguments, similar to range(), i.e. both slice(stop) and slice(start, stop[, step]) are supported. To skip specifying a given argument, one might use None, so that e.g. a[start:] is equivalent to a[slice(start, None)] or a[::-1] is equivalent to a[slice(None, None, -1)]. While the :-based notation is very helpful for simple slicing, the explicit use of slice() objects simplifies the programmatic generation of slicing.
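To illustrate that last point, a slice object can be built once and applied to any sequence; a minimal sketch:

a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
last_two_reversed = slice(None, -3, -1)   # same as a[:-3:-1]
print(a[last_two_reversed])               # [9, 8]
print("abcdefghij"[last_two_reversed])    # 'ji' -- works on strings too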
16,476,924
4,246
1,368,169,489
["python","pandas","dataframe","loops"]
How can I iterate over rows in a Pandas DataFrame? I have a pandas dataframe, df: c1 c2 0 10 100 1 11 110 2 12 120 How do I iterate over the rows of this dataframe? For every row, I want to access its elements (values in cells) by the name of the columns. For example: for row in df.rows: print(row['c1'], row['c2']) I found a similar question, which suggests using either of these: for date, row in df.T.iteritems(): for row in df.iterrows(): But I do not understand what the row object is and how I can work with it.
DataFrame.iterrows is a generator which yields both the index and row (as a Series): import pandas as pd df = pd.DataFrame({'c1': [10, 11, 12], 'c2': [100, 110, 120]}) df = df.reset_index() # make sure indexes pair with number of rows for index, row in df.iterrows(): print(row['c1'], row['c2']) 10 100 11 110 12 120 Obligatory disclaimer from the documentation Iterating through pandas objects is generally slow. In many cases, iterating manually over the rows is not needed and can be avoided with one of the following approaches: Look for a vectorized solution: many operations can be performed using built-in methods or NumPy functions, (boolean) indexing, … When you have a function that cannot work on the full DataFrame/Series at once, it is better to use apply() instead of iterating over the values. See the docs on function application. If you need to do iterative manipulations on the values but performance is important, consider writing the inner loop with cython or numba. See the enhancing performance section for some examples of this approach. Other answers in this thread delve into greater depth on alternatives to iter* functions if you are interested to learn more.
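For comparison, here is what a vectorized version of the same work could look like; a sketch, assuming the df defined above:

# Whole-column arithmetic replaces the row loop entirely
sums = df['c1'] + df['c2']
print(sums.tolist())  # [110, 121, 132]

# Row-wise formatting, if truly needed, via apply (still better avoided)
print(df.apply(lambda row: f"{row['c1']} {row['c2']}", axis=1).tolist())
# ['10 100', '11 110', '12 120']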
423,379
4,047
1,231,393,502
["python","global-variables","scope"]
How can I use a global variable in a function? How do I create or use a global variable inside a function? How do I use a global variable that was defined in one function inside other functions? Failing to use the global keyword where appropriate often causes UnboundLocalError. The precise rules for this are explained at UnboundLocalError on local variable when reassigned after first use. Generally, please close other questions as a duplicate of that question when an explanation is sought, and this question when someone simply needs to know the global keyword.
You can use a global variable within other functions by declaring it as global within each function that assigns a value to it: globvar = 0 def set_globvar_to_one(): global globvar # Needed to modify global copy of globvar globvar = 1 def print_globvar(): print(globvar) # No need for global declaration to read value of globvar set_globvar_to_one() print_globvar() # Prints 1 Since it's unclear whether globvar = 1 is creating a local variable or changing a global variable, Python defaults to creating a local variable, and makes you explicitly choose the other behavior with the global keyword. See other answers if you want to share a global variable across modules.
415,511
3,992
1,231,217,663
["python","datetime","time"]
How do I get the current time in Python?
Use datetime: >>> import datetime >>> now = datetime.datetime.now() >>> now datetime.datetime(2009, 1, 6, 15, 8, 24, 78915) >>> str(now) '2009-01-06 15:08:24.078915' For just the clock time without the date: >>> now.time() datetime.time(15, 8, 24, 78915) >>> str(now.time()) '15:08:24.078915' To save typing, you can import the datetime object from the datetime module: from datetime import datetime Then remove the prefix datetime. from all of the above.
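For a formatted timestamp rather than the default representation, strftime() is the usual companion; a small sketch:

from datetime import datetime

now = datetime.now()
# strftime uses the usual C-style format directives
print(now.strftime("%Y-%m-%d %H:%M:%S"))  # e.g. '2009-01-06 15:08:24'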
606,191
3,885
1,236,082,981
["python","string","python-3.x"]
Convert bytes to a string in Python 3 I captured the standard output of an external program into a bytes object: >>> from subprocess import * >>> stdout = Popen(['ls', '-l'], stdout=PIPE).communicate()[0] >>> stdout b'total 0\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file1\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file2\n' I want to convert that to a normal Python string, so that I can print it like this: >>> print(stdout) -rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file1 -rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file2 How do I convert the bytes object to a str with Python 3? See Best way to convert string to bytes in Python 3? for the other way around.
Decode the bytes object to produce a string: >>> b"abcde".decode("utf-8") 'abcde' The above example assumes that the bytes object is in UTF-8, because it is a common encoding. However, you should use the encoding your data is actually in!
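If the bytes might not be valid in the expected encoding, decode() also takes an errors argument; a sketch:

data = b"abc\xff"  # \xff is not valid UTF-8

try:
    data.decode("utf-8")  # strict (default) decoding raises
except UnicodeDecodeError as exc:
    print("invalid:", exc)

# 'replace' substitutes U+FFFD for undecodable bytes instead of raising
print(data.decode("utf-8", errors="replace"))  # 'abc\ufffd'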
1,436,703
3,831
1,253,161,665
["python","magic-methods","repr"]
What is the difference between __str__ and __repr__? What is the difference between __str__ and __repr__ in Python?
Alex Martelli summarized well but, surprisingly, was too succinct. First, let me reiterate the main points in Alex’s post: The default implementation is useless (it’s hard to think of one which wouldn’t be, but yeah) __repr__’s goal is to be unambiguous __str__’s goal is to be readable Container’s __str__ uses contained objects’ __repr__ Default implementation is useless This is mostly a surprise because Python’s defaults tend to be fairly useful. However, in this case, having a default for __repr__ which would act like: return "%s(%r)" % (self.__class__, self.__dict__) Or in new f-string formatting: return f"{self.__class__!s}({self.__dict__!r})" would have been too dangerous (for example, too easy to get into infinite recursion if objects reference each other). So Python cops out. Note that there is one default which is true: if __repr__ is defined, and __str__ is not, the object will behave as though __str__=__repr__. This means, in simple terms: almost every object you implement should have a functional __repr__ that’s usable for understanding the object. Implementing __str__ is optional: do that if you need a “pretty print” functionality (for example, used by a report generator). The goal of __repr__ is to be unambiguous Let me come right out and say it — I do not believe in debuggers. I don’t really know how to use any debugger, and have never used one seriously. Furthermore, I believe that the big fault in debuggers is their basic nature — most failures I debug happened a long long time ago, in a galaxy far far away. This means that I do believe, with religious fervor, in logging. Logging is the lifeblood of any decent fire-and-forget server system. Python makes it easy to log: with maybe some project specific wrappers, all you need is a log(INFO, "I am in the weird function and a is", a, "and b is", b, "but I got a null C — using default", default_c) But you have to do the last step — make sure every object you implement has a useful repr, so code like that can just work. This is why the “eval” thing comes up: if you have enough information so eval(repr(c))==c, that means you know everything there is to know about c. If that’s easy enough, at least in a fuzzy way, do it. If not, make sure you have enough information about c anyway. I usually use an eval-like format: "MyClass(this=%r,that=%r)" % (self.this,self.that). It does not mean that you can actually construct MyClass, or that those are the right constructor arguments — but it is a useful form to express “this is everything you need to know about this instance”. Note: I used %r above, not %s. You always want to use repr() [or %r formatting character, equivalently] inside __repr__ implementation, or you’re defeating the goal of repr. You want to be able to differentiate MyClass(3) and MyClass("3"). The goal of __str__ is to be readable Specifically, it is not intended to be unambiguous — notice that str(3)==str("3"). Likewise, if you implement an IP abstraction, having the str of it look like 192.168.1.1 is just fine. When implementing a date/time abstraction, the str can be "2010/4/12 15:35:22", etc. The goal is to represent it in a way that a user, not a programmer, would want to read it. Chop off useless digits, pretend to be some other class — as long as it supports readability, it is an improvement. Container’s __str__ uses contained objects’ __repr__ This seems surprising, doesn’t it? It is a little, but how readable would it be if it used their __str__?
[moshe is, 3, hello world, this is a list, oh I don't know, containing just 4 elements] Not very. Specifically, the strings in a container would find it way too easy to disturb its string representation. In the face of ambiguity, remember, Python resists the temptation to guess. If you want the above behavior when you’re printing a list, just print("[" + ", ".join(lst) + "]") (you can probably also figure out what to do about dictionaries). Summary Implement __repr__ for any class you implement. This should be second nature. Implement __str__ if you think it would be useful to have a string version which errs on the side of readability.
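Putting the two recommendations together, a minimal sketch of a class implementing both methods (the Point class is illustrative):

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __repr__(self):
        # Unambiguous, eval-like; note the !r (repr) conversions
        return f"Point(x={self.x!r}, y={self.y!r})"
    def __str__(self):
        # Readable, aimed at users rather than programmers
        return f"({self.x}, {self.y})"

p = Point(1, 2)
print(repr(p))  # Point(x=1, y=2)
print(str(p))   # (1, 2)
print([p])      # [Point(x=1, y=2)] -- containers use __repr__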
17,071,871
3,632
1,371,058,925
["python","pandas","dataframe","indexing","filter"]
How do I select rows from a DataFrame based on column values? How can I select rows from a DataFrame based on values in some column in Pandas? In SQL, I would use: SELECT * FROM table WHERE column_name = some_value
To select rows whose column value equals a scalar, some_value, use ==: df.loc[df['column_name'] == some_value] To select rows whose column value is in an iterable, some_values, use isin: df.loc[df['column_name'].isin(some_values)] Combine multiple conditions with &: df.loc[(df['column_name'] >= A) & (df['column_name'] <= B)] Note the parentheses. Due to Python's operator precedence rules, & binds more tightly than <= and >=. Thus, the parentheses in the last example are necessary. Without the parentheses df['column_name'] >= A & df['column_name'] <= B is parsed as df['column_name'] >= (A & df['column_name']) <= B which results in a Truth value of a Series is ambiguous error. To select rows whose column value does not equal some_value, use !=: df.loc[df['column_name'] != some_value] The isin returns a boolean Series, so to select rows whose value is not in some_values, negate the boolean Series using ~: df = df.loc[~df['column_name'].isin(some_values)] # .loc is not in-place replacement For example, import pandas as pd import numpy as np df = pd.DataFrame({'A': 'foo bar foo bar foo bar foo foo'.split(), 'B': 'one one two three two two one three'.split(), 'C': np.arange(8), 'D': np.arange(8) * 2}) print(df) # A B C D # 0 foo one 0 0 # 1 bar one 1 2 # 2 foo two 2 4 # 3 bar three 3 6 # 4 foo two 4 8 # 5 bar two 5 10 # 6 foo one 6 12 # 7 foo three 7 14 print(df.loc[df['A'] == 'foo']) yields A B C D 0 foo one 0 0 2 foo two 2 4 4 foo two 4 8 6 foo one 6 12 7 foo three 7 14 If you have multiple values you want to include, put them in a list (or more generally, any iterable) and use isin: print(df.loc[df['B'].isin(['one','three'])]) yields A B C D 0 foo one 0 0 1 bar one 1 2 3 bar three 3 6 6 foo one 6 12 7 foo three 7 14 Note, however, that if you wish to do this many times, it is more efficient to make an index first, and then use df.loc: df = df.set_index(['B']) print(df.loc['one']) yields A C D B one foo 0 0 one bar 1 2 one foo 6 12 or, to include multiple values from the index use df.index.isin: df.loc[df.index.isin(['one','two'])] yields A C D B one foo 0 0 one bar 1 2 two foo 2 4 two foo 4 8 two bar 5 10 one foo 6 12
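pandas also provides DataFrame.query(), which expresses the same filters as strings (the @ prefix references local Python variables); a sketch, recreating the original df since the example above ends with set_index:

import pandas as pd
import numpy as np

df = pd.DataFrame({'A': 'foo bar foo bar foo bar foo foo'.split(),
                   'B': 'one one two three two two one three'.split(),
                   'C': np.arange(8), 'D': np.arange(8) * 2})

wanted = ['one', 'three']
print(df.query('B in @wanted'))  # same rows as the isin example
print(df.query('2 <= C <= 5'))   # chained comparisons work inside query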
3,437,059
3,585
1,281,322,370
["python","string","substring","contains"]
Does Python have a string 'contains' substring method? I'm looking for a string.contains or string.indexof method in Python. I want to do: if not somestring.contains("blah"): continue
Use the in operator: if "blah" not in somestring: continue Note: This is case-sensitive.
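If you need the position rather than a yes/no answer, str.find() returns an index (or -1), while str.index() raises on a miss; a quick sketch:

somestring = "foo blah bar"
print(somestring.find("blah"))   # 4, the index of the first match
print(somestring.find("nope"))   # -1 rather than an exception
print(somestring.index("blah"))  # 4 too, but raises ValueError when absent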
6,996,603
3,543
1,312,895,142
["python","file-io","directory","delete-file"]
How can I delete a file or folder in Python?
Use one of these methods: pathlib.Path.unlink() removes a file or symbolic link. pathlib.Path.rmdir() removes an empty directory. shutil.rmtree() deletes a directory and all its contents. On Python 3.3 and below, you can use these methods instead of the pathlib ones: os.remove() removes a file. os.unlink() removes a symbolic link. os.rmdir() removes an empty directory.
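If the file may or may not exist, unlink() accepts missing_ok=True on Python 3.8+, and a try/except works everywhere; a small sketch (the filename is illustrative):

from pathlib import Path

Path("maybe_there.txt").unlink(missing_ok=True)  # no error if absent (3.8+)

# Version-independent equivalent:
try:
    Path("maybe_there.txt").unlink()
except FileNotFoundError:
    pass  # already gone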
1,132,941
3,509
1,247,680,837
["python","language-design","default-parameters","least-astonishment"]
"Least Astonishment" and the Mutable Default Argument def foo(a=[]): a.append(5) return a Python novices expect this function called with no parameter to always return a list with only one element: [5]. The result is different and astonishing: >>> foo() [5] >>> foo() [5, 5] >>> foo() [5, 5, 5] >>> foo() [5, 5, 5, 5] >>> foo() The behavior has an underlying explanation, but it is unexpected if you don't understand internals. What is the reason for binding the default argument at function definition, and not at function execution? I doubt the experienced behavior has a practical use (who really used static variables in C, without breeding bugs?) Edit: Baczek made an interesting example. Together with your comments and Utaal's in particular, I elaborated: def a(): print("a executed") return [] def b(x=a()): x.append(5) print(x) a executed >>> b() [5] >>> b() [5, 5] It seems that the design decision was relative to where to put the scope of parameters: inside the function, or "together" with it? Doing the binding inside the function would mean that x is effectively bound to the specified default when the function is called, not defined, something that would present a deep flaw: the def line would be "hybrid" in the sense that part of the binding (of the function object) would happen at definition, and part (assignment of default parameters) at function invocation time. The actual behavior is more consistent: everything of that line gets evaluated when that line is executed, meaning at function definition.
Actually, this is not a design flaw, and it is not because of internals or performance. It comes simply from the fact that functions in Python are first-class objects, and not only a piece of code. As soon as you think of it this way, then it completely makes sense: a function is an object being evaluated on its definition; default parameters are kind of "member data" and therefore their state may change from one call to the other - exactly as in any other object. In any case, the Effbot (Fredrik Lundh) has a very nice explanation of the reasons for this behavior in Default Parameter Values in Python. I found it very clear, and I really suggest reading it for a better knowledge of how function objects work.
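For completeness, the standard workaround is a None sentinel, so the list is created inside the body and therefore at call time:

def foo(a=None):
    if a is None:  # the sentinel means "no argument was passed"
        a = []     # evaluated on every call, so each call gets a fresh list
    a.append(5)
    return a

print(foo())  # [5]
print(foo())  # [5] -- no accumulation this time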
36,901
3,498
1,220,195,075
["python","syntax","parameter-passing","variadic-functions","argument-unpacking"]
What does ** (double star/asterisk) and * (star/asterisk) do for parameters? What do *args and **kwargs mean in these function definitions? def foo(x, y, *args): pass def bar(x, y, **kwargs): pass See What do ** (double star/asterisk) and * (star/asterisk) mean in a function call? for the complementary question about arguments.
The *args and **kwargs are common idioms to allow an arbitrary number of arguments to functions, as described in the section more on defining functions in the Python tutorial. The *args will give you all positional arguments as a tuple: def foo(*args): for a in args: print(a) foo(1) # 1 foo(1, 2, 3) # 1 # 2 # 3 The **kwargs will give you all keyword arguments as a dictionary: def bar(**kwargs): for a in kwargs: print(a, kwargs[a]) bar(name='one', age=27) # name one # age 27 Both idioms can be mixed with normal arguments to allow a set of fixed and some variable arguments: def foo(kind, *args, bar=None, **kwargs): print(kind, args, bar, kwargs) foo(123, 'a', 'b', apple='red') # 123 ('a', 'b') None {'apple': 'red'} It is also possible to use this the other way around: def foo(a, b, c): print(a, b, c) obj = {'b':10, 'c':'lee'} foo(100, **obj) # 100 10 lee Another usage of the *l idiom is to unpack argument lists when calling a function. def foo(bar, lee): print(bar, lee) baz = [1, 2] foo(*baz) # 1 2 In Python 3 it is possible to use *l on the left side of an assignment (Extended Iterable Unpacking), though it gives a list instead of a tuple in this context: first, *rest = [1, 2, 3, 4] # first = 1 # rest = [2, 3, 4] Also Python 3 adds a new semantic (see PEP 3102): def func(arg1, arg2, arg3, *, kwarg1, kwarg2): pass Such a function accepts only 3 positional arguments, and everything after * can only be passed as keyword arguments. Note: A Python dict, semantically used for keyword argument passing, is arbitrarily ordered. However, in Python 3.6+, keyword arguments are guaranteed to remember insertion order. "The order of elements in **kwargs now corresponds to the order in which keyword arguments were passed to the function." - What’s New In Python 3.6. In fact, all dicts in CPython 3.6 will remember insertion order as an implementation detail, and this becomes standard in Python 3.7.
2,612,802
3,368
1,270,889,386
["python","list","clone","mutable"]
How do I clone a list so that it doesn't change unexpectedly after assignment? While using new_list = my_list, any modifications to new_list change my_list every time. Why is this, and how can I clone or copy the list to prevent it? For example: >>> my_list = [1, 2, 3] >>> new_list = my_list >>> new_list.append(4) >>> my_list [1, 2, 3, 4]
new_list = my_list doesn't actually create a second list. The assignment just copies the reference to the list, not the actual list, so both new_list and my_list refer to the same list after the assignment. To actually copy the list, you have several options: You can use the built-in list.copy() method (available since Python 3.3): new_list = old_list.copy() You can slice it: new_list = old_list[:] Alex Martelli's opinion (at least back in 2007) about this is that it is a weird syntax and it does not make sense to use it ever. ;) (In his opinion, the next one is more readable). You can use the built-in list() constructor: new_list = list(old_list) You can use generic copy.copy(): import copy new_list = copy.copy(old_list) This is a little slower than list() because it has to find out the datatype of old_list first. If you need to copy the elements of the list as well, use generic copy.deepcopy(): import copy new_list = copy.deepcopy(old_list) Obviously the slowest and most memory-hungry method, but sometimes unavoidable. This operates recursively; it will handle any number of levels of nested lists (or other containers). Example: import copy class Foo(object): def __init__(self, val): self.val = val def __repr__(self): return f'Foo({self.val!r})' foo = Foo(1) a = ['foo', foo] b = a.copy() c = a[:] d = list(a) e = copy.copy(a) f = copy.deepcopy(a) # edit original list and instance a.append('baz') foo.val = 5 print(f'original: {a}\nlist.copy(): {b}\nslice: {c}\nlist(): {d}\ncopy: {e}\ndeepcopy: {f}') Result: original: ['foo', Foo(5), 'baz'] list.copy(): ['foo', Foo(5)] slice: ['foo', Foo(5)] list(): ['foo', Foo(5)] copy: ['foo', Foo(5)] deepcopy: ['foo', Foo(1)]
1,720,421
3,238
1,258,009,449
["python","list","concatenation"]
How do I concatenate two lists in Python? How do I concatenate two lists in Python? Example: listone = [1, 2, 3] listtwo = [4, 5, 6] Expected outcome: >>> joinedlist [1, 2, 3, 4, 5, 6]
Use the + operator to combine the lists: listone = [1, 2, 3] listtwo = [4, 5, 6] joinedlist = listone + listtwo Output: >>> joinedlist [1, 2, 3, 4, 5, 6] NOTE: This will create a new list with a shallow copy of the items in the first list, followed by a shallow copy of the items in the second list. Use copy.deepcopy() to get deep copies of lists.
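On Python 3.5+, iterable unpacking (PEP 448) gives the same result and also accepts non-list iterables; a sketch:

listone = [1, 2, 3]
listtwo = [4, 5, 6]
joinedlist = [*listone, *listtwo]  # unpacks each iterable into a new list
print(joinedlist)                  # [1, 2, 3, 4, 5, 6]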
252,703
3,109
1,225,432,536
["python","list","data-structures","append","extend"]
What is the difference between Python's list methods append and extend? What's the difference between the list methods append() and extend()?
.append() appends a single object at the end of the list: >>> x = [1, 2, 3] >>> x.append([4, 5]) >>> print(x) [1, 2, 3, [4, 5]] .extend() appends multiple objects that are taken from inside the specified iterable: >>> x = [1, 2, 3] >>> x.extend([4, 5]) >>> print(x) [1, 2, 3, 4, 5]
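Note that extend() accepts any iterable, not just a list, and += on a list behaves like extend(); a quick sketch:

x = [1, 2, 3]
x.extend(range(4, 6))  # any iterable works
print(x)               # [1, 2, 3, 4, 5]

y = [1, 2, 3]
y += [4, 5]            # in-place, same effect as y.extend([4, 5])
print(y)               # [1, 2, 3, 4, 5]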
11,277,432
3,013
1,341,088,079
["python","dictionary","unset"]
How can I remove a key from a Python dictionary? I want to remove a key from a dictionary if it is present. I currently use this code: if key in my_dict: del my_dict[key] Without the if statement, the code will raise KeyError if the key is not present. How can I handle this more simply? See Delete an element from a dictionary for more general approaches to the problem of removing a key from a dict (including ones which produce a modified copy).
To delete a key regardless of whether it is in the dictionary, use the two-argument form of dict.pop(): my_dict.pop('key', None) This will return my_dict[key] if key exists in the dictionary, and None otherwise. If the second parameter is not specified (i.e. my_dict.pop('key')) and key does not exist, a KeyError is raised. To delete a key that is guaranteed to exist, you can also use del my_dict['key'] This will raise a KeyError if the key is not in the dictionary.
2,720,014
2,887
1,272,360,205
["python","pip","pypi"]
How to upgrade all Python packages with pip Is it possible to upgrade all Python packages at one time with pip? Note: there is a feature request for this on the official issue tracker.
There isn't a built-in flag yet. Starting with pip version 22.3, the --outdated and --format=freeze options have become mutually exclusive. Use Python to parse the JSON output: pip --disable-pip-version-check list --outdated --format=json | python -c "import json, sys; print('\n'.join([x['name'] for x in json.load(sys.stdin)]))" | xargs -n1 pip install -U If you are using pip<22.3 you can use: pip list --outdated --format=freeze | grep -v '^\-e' | cut -d = -f 1 | xargs -n1 pip install -U For older versions of pip: pip freeze --local | grep -v '^\-e' | cut -d = -f 1 | xargs -n1 pip install -U The grep is to skip editable ("-e") package definitions, as suggested by @jawache. (Yes, you could replace grep+cut with sed or awk or perl or...). The -n1 flag for xargs prevents stopping everything if updating one package fails (thanks @andsens). Note: there are infinite potential variations for this. I'm trying to keep this answer short and simple, but please do suggest variations in the comments!
72,899
2,856
1,221,575,267
["python","list","sorting","dictionary","data-structures"]
How can I sort a list of dictionaries by a value of the dictionary in Python? How do I sort a list of dictionaries by a specific key's value? Given: [{'name': 'Homer', 'age': 39}, {'name': 'Bart', 'age': 10}] When sorted by name, it should become: [{'name': 'Bart', 'age': 10}, {'name': 'Homer', 'age': 39}]
The sorted() function takes a key= parameter newlist = sorted(list_to_be_sorted, key=lambda d: d['name']) Alternatively, you can use operator.itemgetter instead of defining the function yourself from operator import itemgetter newlist = sorted(list_to_be_sorted, key=itemgetter('name')) For completeness, add reverse=True to sort in descending order newlist = sorted(list_to_be_sorted, key=itemgetter('name'), reverse=True)
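itemgetter also accepts several keys, sorting by the first and breaking ties with the rest; a sketch with an extra, hypothetical row added for illustration:

from operator import itemgetter

people = [{'name': 'Homer', 'age': 39},
          {'name': 'Bart', 'age': 10},
          {'name': 'Bart', 'age': 8}]  # extra entry to show tie-breaking
newlist = sorted(people, key=itemgetter('name', 'age'))
print(newlist)
# [{'name': 'Bart', 'age': 8}, {'name': 'Bart', 'age': 10},
#  {'name': 'Homer', 'age': 39}]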
379,906
2,778
1,229,651,546
["python","parsing","floating-point","type-conversion","integer"]
How do I parse a string to a float or int? How can I convert an str to a float? "545.2222" -> 545.2222 Or an str to a int? "31" -> 31 For the reverse, see Convert integer to string in Python and Converting a float to a string without rounding it. Please instead use How can I read inputs as numbers? to close duplicate questions where OP received a string from user input and immediately wants to convert it, or was hoping for input (in 3.x) to convert the type automatically.
>>> a = "545.2222" >>> float(a) 545.22220000000004 >>> int(float(a)) 545
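If you want an int when the string holds one and a float otherwise, a common pattern (a sketch; the helper name is illustrative) is to try int first and fall back:

def parse_number(s):
    """Return an int if possible, otherwise a float; ValueError if neither."""
    try:
        return int(s)
    except ValueError:
        return float(s)

print(parse_number("31"))        # 31
print(parse_number("545.2222"))  # 545.2222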
1,602,934
2,675
1,256,151,909
["python","dictionary"]
Check if a given key already exists in a dictionary I wanted to test if a key exists in a dictionary before updating the value for the key. I wrote the following code: if 'key1' in dict.keys(): print "blah" else: print "boo" I think this is not the best way to accomplish this task. Is there a better way to test for a key in the dictionary?
in tests for the existence of a key in a dict: d = {"key1": 10, "key2": 23} if "key1" in d: print("this will execute") if "nonexistent key" in d: print("this will not") Use dict.get() to provide a default value when the key does not exist: d = {} for i in range(100): key = i % 10 d[key] = d.get(key, 0) + 1 To provide a default value for every key, either use dict.setdefault() on each assignment: d = {} for i in range(100): d[i % 10] = d.setdefault(i % 10, 0) + 1 ...or better, use defaultdict from the collections module: from collections import defaultdict d = defaultdict(int) for i in range(100): d[i % 10] += 1
610,883
2,534
1,236,177,959
["python","class","object","attributes","attributeerror"]
How can I check if an object has an attribute? How do I check if an object has some attribute? For example: >>> a = SomeClass() >>> a.property Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: SomeClass instance has no attribute 'property' How do I tell if a has the attribute property before using it?
Try hasattr(): if hasattr(a, 'property'): a.property See zweiterlinde's answer, which offers good advice about asking forgiveness! It is a very Pythonic approach! The general practice in Python is that, if the property is likely to be there most of the time, simply call it and either let the exception propagate, or trap it with a try/except block. This will likely be faster than hasattr. If the property is likely to not be there most of the time, or you're not sure, using hasattr will probably be faster than repeatedly falling into an exception block.
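The try/except alternative mentioned above looks like this; a minimal sketch reusing a bare SomeClass:

class SomeClass:
    pass

a = SomeClass()

try:
    value = a.property  # just use it (the "ask forgiveness" style)
except AttributeError:
    value = None        # fall back when the attribute is missing

# getattr() offers the same fallback in a single call:
value = getattr(a, 'property', None)
print(value)  # None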
12,943,819
2,053
1,350,509,923
["python","json","formatting","pretty-print"]
How to prettyprint a JSON file? How do I pretty-print a JSON file in Python?
Use the indent= parameter of json.dump() or json.dumps() to specify how many spaces to indent by: >>> import json >>> your_json = '["foo", {"bar": ["baz", null, 1.0, 2]}]' >>> parsed = json.loads(your_json) >>> print(json.dumps(parsed, indent=4)) [ "foo", { "bar": [ "baz", null, 1.0, 2 ] } ] To parse a file, use json.load(): with open('filename.txt', 'r') as handle: parsed = json.load(handle)
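From the command line, the standard library exposes the same functionality via the json.tool module, so no script is needed:

python -m json.tool filename.txt             # pretty-prints the file to stdout
python -m json.tool --indent 2 filename.txt  # custom indent (Python 3.9+)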
354,038
2,022
1,228,853,022
["python","casting","floating-point","type-conversion","integer"]
How do I check if a string represents a number (float or int)? How do I check if a string represents a numeric value in Python? def is_number(s): try: float(s) return True except ValueError: return False The above works, but it seems clunky. Editor's note: If what you are testing comes from user input, it is still a string even if it represents an int or a float. For converting the input, see How can I read inputs as numbers? For ensuring that the input represents an int or float (or other requirements) before proceeding, see Asking the user for input until they give a valid response
Regarding "not only is it ugly and slow": I'd dispute both. A regex or other string parsing method would be uglier and slower. I'm not sure that anything much could be faster than the above. It calls the function and returns. Try/except doesn't introduce much overhead because the most common exception is caught without an extensive search of stack frames. The issue across programming languages is that any numeric conversion function has two kinds of results: A number, if the number is valid; A status code (e.g., via errno) or exception to show that no valid number could be parsed. C (as an example) hacks around this a number of ways. Python lays it out clearly and explicitly. I think your code for doing this is just fine. The only thing that could be cleaner is moving the return True into an else block, to be clear that it's not part of the code under test – not that there's much ambiguity. def is_number(s): try: float(s) except ValueError: # Failed return False else: # Succeeded return True
7,225,900
1,990
1,314,589,984
["python","pip","virtualenv","requirements.txt"]
How can I install packages using pip according to the requirements.txt file from a local directory? Here is the problem: I have a requirements.txt file that looks like: BeautifulSoup==3.2.0 Django==1.3 Fabric==1.2.0 Jinja2==2.5.5 PyYAML==3.09 Pygments==1.4 SQLAlchemy==0.7.1 South==0.7.3 amqplib==0.6.1 anyjson==0.3 ... I have a local archive directory containing all the packages + others. I have created a new virtualenv with bin/virtualenv testing Upon activating it, I tried to install the packages according to requirements.txt from the local archive directory. source bin/activate pip install -r /path/to/requirements.txt -f file:///path/to/archive/ I got some output that seems to indicate that the installation is fine: Downloading/unpacking Fabric==1.2.0 (from -r ../testing/requirements.txt (line 3)) Running setup.py egg_info for package Fabric warning: no previously-included files matching '*' found under directory 'docs/_build' warning: no files found matching 'fabfile.py' Downloading/unpacking South==0.7.3 (from -r ../testing/requirements.txt (line 8)) Running setup.py egg_info for package South .... But a later check revealed that none of the packages are installed properly. I cannot import the packages, and none are found in the site-packages directory of my virtualenv. So what went wrong?
This works for me: pip install -r requirements.txt --no-index --find-links file:///tmp/packages --no-index - Ignore package index (only look at --find-links URLs instead). -f, --find-links <URL> - If <URL> is a URL or a path to an HTML file, then parse for links to archives. If <URL> is a local path or a file:// URL that's a directory, then look for archives in the directory listing.
14,132,789
1,899
1,357,185,040
["python","python-import","relative-path","python-packaging","relative-import"]
Relative imports for the billionth time I've been here: PEP 328 – Imports: Multi-Line and Absolute/Relative Modules, Packages Python packages: relative imports Python relative import example code does not work Relative imports in Python 2.5 Relative imports in Python Python: Disabling relative import and plenty of URLs that I did not copy, some on SO, some on other sites, back when I thought I'd have the solution quickly. The forever-recurring question is this: how do I solve this "Attempted relative import in non-package" message? ImportError: attempted relative import with no known parent package I built an exact replica of the package on pep-0328: package/ __init__.py subpackage1/ __init__.py moduleX.py moduleY.py subpackage2/ __init__.py moduleZ.py moduleA.py The imports were done from the console. I did make functions named spam and eggs in their appropriate modules. Naturally, it didn't work. The answer is apparently in the 4th URL I listed, but it's all alumni to me. There was this response on one of the URLs I visited: Relative imports use a module's __name__ attribute to determine that module's position in the package hierarchy. If the module's name does not contain any package information (e.g. it is set to '__main__') then relative imports are resolved as if the module were a top level module, regardless of where the module is actually located on the file system. The above response looks promising, but it's all hieroglyphs to me. How do I make Python not return to me "Attempted relative import in non-package"? It has an answer that involves -m, supposedly. Why does Python give that error message? What is meant by "non-package"? Why and how do you define a 'package'?
Script vs. Module Here's an explanation. The short version is that there is a big difference between directly running a Python file, and importing that file from somewhere else. Just knowing what directory a file is in does not determine what package Python thinks it is in. That depends, additionally, on how you load the file into Python (by running or by importing). There are two ways to load a Python file: as the top-level script, or as a module. A file is loaded as the top-level script if you execute it directly, for instance by typing python myfile.py on the command line. It is loaded as a module when an import statement is encountered inside some other file. There can only be one top-level script at a time; the top-level script is the Python file you ran to start things off. Naming When a file is loaded, it is given a name (which is stored in its __name__ attribute). If it was loaded as the top-level script, its name is __main__. If it was loaded as a module, its name is the filename, preceded by the names of any packages/subpackages of which it is a part, separated by dots. So for instance in your example: package/ __init__.py subpackage1/ __init__.py moduleX.py moduleA.py if you imported moduleX (note: imported, not directly executed), its name would be package.subpackage1.moduleX. If you imported moduleA, its name would be package.moduleA. However, if you directly run moduleX from the command line, its name will instead be __main__, and if you directly run moduleA from the command line, its name will be __main__. When a module is run as the top-level script, it loses its normal name and its name is instead __main__. Accessing a module NOT through its containing package There is an additional wrinkle: the module's name depends on whether it was imported "directly" from the directory it is in or imported via a package. This only makes a difference if you run Python in a directory, and try to import a file in that same directory (or a subdirectory of it). For instance, if you start the Python interpreter in the directory package/subpackage1 and then do import moduleX, the name of moduleX will just be moduleX, and not package.subpackage1.moduleX. This is because Python adds the current directory to its search path when the interpreter is entered interactively; if it finds the to-be-imported module in the current directory, it will not know that that directory is part of a package, and the package information will not become part of the module's name. A special case is if you run the interpreter interactively (e.g., just type python and start entering Python code on the fly). In this case, the name of that interactive session is __main__. Now here is the crucial thing for your error message: if a module's name has no dots, it is not considered to be part of a package. It doesn't matter where the file actually is on disk. All that matters is what its name is, and its name depends on how you loaded it. Now look at the quote you included in your question: Relative imports use a module's __name__ attribute to determine that module's position in the package hierarchy. If the module's name does not contain any package information (e.g. it is set to '__main__') then relative imports are resolved as if the module were a top-level module, regardless of where the module is actually located on the file system. Relative imports... Relative imports use the module's name to determine where it is in a package. When you use a relative import like from ..
import foo, the dots indicate to step up some number of levels in the package hierarchy. For instance, if your current module's name is package.subpackage1.moduleX, then ..moduleA would mean package.moduleA. For a from .. import to work, the module's name must have at least as many dots as there are in the import statement. ... are only relative in a package However, if your module's name is __main__, it is not considered to be in a package. Its name has no dots, and therefore you cannot use from .. import statements inside it. If you try to do so, you will get the "relative-import in non-package" error. Scripts can't import relative What you probably did is you tried to run moduleX or the like from the command line. When you did this, its name was set to __main__, which means that relative imports within it will fail, because its name does not reveal that it is in a package. Note that this will also happen if you run Python from the same directory where a module is, and then try to import that module, because, as described above, Python will find the module in the current directory "too early" without realizing it is part of a package. Also remember that when you run the interactive interpreter, the "name" of that interactive session is always __main__. Thus you cannot do relative imports directly from an interactive session. Relative imports are only for use within module files. Two solutions: If you really do want to run moduleX directly, but you still want it to be considered part of a package, you can do python -m package.subpackage1.moduleX. The -m tells Python to load it as a module, not as the top-level script. Or perhaps you don't actually want to run moduleX, you just want to run some other script, say myfile.py, that uses functions inside moduleX. If that is the case, put myfile.py somewhere else (not inside the package directory) and run it. If inside myfile.py you do things like from package.moduleA import spam, it will work fine. Notes For either of these solutions, the package directory (package in your example) must be accessible from the Python module search path (sys.path). If it is not, you will not be able to use anything in the package reliably at all. Since Python 2.6, the module's "name" for package-resolution purposes is determined not just by its __name__ attribute but also by the __package__ attribute. That's why I'm avoiding using the explicit symbol __name__ to refer to the module's "name". Since Python 2.6 a module's "name" is effectively __package__ + '.' + __name__, or just __name__ if __package__ is None.
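To make the naming rules concrete, here is a minimal probe module you can drop into a package; the file and package names are only for illustration.

# mypackage/probe.py - inspect how this module was loaded
print("__name__    =", __name__)
print("__package__ =", __package__)

Running python mypackage/probe.py prints __name__ = __main__ with no package information, so relative imports inside it fail. Running python -m mypackage.probe still prints __name__ = __main__, but __package__ is set to mypackage, which is exactly why the -m solution makes relative imports work. A plain import mypackage.probe gives the full dotted name mypackage.probe.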
16,981,921
1,885
1,370,600,810
["python","python-3.x","python-import"]
Relative imports in Python 3 I want to import a function from another file in the same directory. Usually, one of the following works: from .mymodule import myfunction from mymodule import myfunction ...but the other one gives me one of these errors: ImportError: attempted relative import with no known parent package ModuleNotFoundError: No module named 'mymodule' SystemError: Parent module '' not loaded, cannot perform relative import Why is this?
unfortunately, this module needs to be inside the package, and it also needs to be runnable as a script, sometimes. Any idea how I could achieve that? It's quite common to have a layout like this... main.py mypackage/ __init__.py mymodule.py myothermodule.py ...with a mymodule.py like this... #!/usr/bin/env python3 # Exported function def as_int(a): return int(a) # Test function for module def _test(): assert as_int('1') == 1 if __name__ == '__main__': _test() ...a myothermodule.py like this... #!/usr/bin/env python3 from .mymodule import as_int # Exported function def add(a, b): return as_int(a) + as_int(b) # Test function for module def _test(): assert add('1', '1') == 2 if __name__ == '__main__': _test() ...and a main.py like this... #!/usr/bin/env python3 from mypackage.myothermodule import add def main(): print(add('1', '1')) if __name__ == '__main__': main() ...which works fine when you run main.py or mypackage/mymodule.py, but fails with mypackage/myothermodule.py, due to the relative import... from .mymodule import as_int The way you're supposed to run it is... python3 -m mypackage.myothermodule ...but it's somewhat verbose, and doesn't mix well with a shebang line like #!/usr/bin/env python3. The simplest fix for this case, assuming the name mymodule is globally unique, would be to avoid using relative imports, and just use... from mymodule import as_int ...although, if it's not unique, or your package structure is more complex, you'll need to include the directory containing your package directory in PYTHONPATH, and do it like this... from mypackage.mymodule import as_int ...or if you want it to work "out of the box", you can frob the PYTHONPATH in code first with this... import sys import os SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__)) sys.path.append(os.path.dirname(SCRIPT_DIR)) from mypackage.mymodule import as_int It's kind of a pain, but there's a clue as to why in an email written by a certain Guido van Rossum... I'm -1 on this and on any other proposed twiddlings of the __main__ machinery. The only use case seems to be running scripts that happen to be living inside a module's directory, which I've always seen as an antipattern. To make me change my mind you'd have to convince me that it isn't. Whether running scripts inside a package is an antipattern or not is subjective, but personally I find it really useful in a package I have which contains some custom wxPython widgets, so I can run the script for any of the source files to display a wx.Frame containing only that widget for testing purposes.
67,631
1,872
1,221,517,855
["python","python-import","python-module"]
How can I import a module dynamically given the full path? How do I load a Python module given its full path? Note that the file can be anywhere in the filesystem where the user has access rights. See also: How to import a module given its name as string?
Suppose MyClass is defined in the module module.name, which lives at /path/to/file.py. Here is how to import MyClass from that module. For Python 3.5+ use (docs): import importlib.util import sys spec = importlib.util.spec_from_file_location("module.name", "/path/to/file.py") foo = importlib.util.module_from_spec(spec) sys.modules["module.name"] = foo spec.loader.exec_module(foo) foo.MyClass() For Python 3.3 and 3.4 use: from importlib.machinery import SourceFileLoader foo = SourceFileLoader("module.name", "/path/to/file.py").load_module() foo.MyClass() (Although this has been deprecated in Python 3.4.) For Python 2 use: import imp foo = imp.load_source('module.name', '/path/to/file.py') foo.MyClass() There are equivalent convenience functions for compiled Python files and DLLs. See also http://bugs.python.org/issue21436.
5,574,702
1,817
1,302,137,950
["python","printing","stderr"]
How do I print to stderr in Python? There are several ways to write to stderr: print >> sys.stderr, "spam" # Python 2 only. sys.stderr.write("spam\n") os.write(2, b"spam\n") from __future__ import print_function print("spam", file=sys.stderr) What are the differences between these methods? Which method should be preferred?
I found this to be the only approach that is short, flexible, portable and readable: import sys def eprint(*args, **kwargs): print(*args, file=sys.stderr, **kwargs) The helper function eprint saves some repetition. It can be used in the same way as the standard print function: >>> print("Test") Test >>> eprint("Test") Test >>> eprint("foo", "bar", "baz", sep="---") foo---bar---baz
6,760,685
1,809
1,311,158,877
["python","singleton","decorator","base-class","metaclass"]
What is the best way of implementing a singleton in Python? I have multiple classes which would become singletons (my use case is for a logger, but this is not important). I do not wish to clutter several classes with added gumph when I can simply inherit or decorate. Best methods: Method 1: A decorator def singleton(class_): instances = {} def getinstance(*args, **kwargs): if class_ not in instances: instances[class_] = class_(*args, **kwargs) return instances[class_] return getinstance @singleton class MyClass(BaseClass): pass Pros Decorators are additive in a way that is often more intuitive than multiple inheritance. Cons While objects created using MyClass() would be true singleton objects, MyClass itself is a function, not a class, so you cannot call class methods from it. Also for x = MyClass(); y = MyClass(); t = type(x)(); then x == y but x != t and y != t Method 2: A base class class Singleton(object): _instance = None def __new__(class_, *args, **kwargs): if not isinstance(class_._instance, class_): class_._instance = object.__new__(class_, *args, **kwargs) return class_._instance class MyClass(Singleton, BaseClass): pass Pros It's a true class Cons Multiple inheritance - eugh! __new__ could be overwritten during inheritance from a second base class? One has to think more than is necessary. Method 3: A metaclass class Singleton(type): _instances = {} def __call__(cls, *args, **kwargs): if cls not in cls._instances: cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs) return cls._instances[cls] #Python2 class MyClass(BaseClass): __metaclass__ = Singleton #Python3 class MyClass(BaseClass, metaclass=Singleton): pass Pros It's a true class Auto-magically covers inheritance Uses __metaclass__ for its proper purpose (and made me aware of it) Cons Are there any? Method 4: decorator returning a class with the same name def singleton(class_): class class_w(class_): _instance = None def __new__(class_, *args, **kwargs): if class_w._instance is None: class_w._instance = super(class_w, class_).__new__(class_, *args, **kwargs) class_w._instance._sealed = False return class_w._instance def __init__(self, *args, **kwargs): if self._sealed: return super(class_w, self).__init__(*args, **kwargs) self._sealed = True class_w.__name__ = class_.__name__ return class_w @singleton class MyClass(BaseClass): pass Pros It's a true class Auto-magically covers inheritance Cons Is there not an overhead for creating each new class? Here we are creating two classes for each class we wish to make a singleton. While this is fine in my case, I worry that this might not scale. Of course there is a matter of debate as to whether it ought to be too easy to scale this pattern... What is the point of the _sealed attribute? Can't call methods of the same name on base classes using super() because they will recurse. This means you can't customize __new__ and can't subclass a class that needs you to call up to __init__. Method 5: a module (a module file, singleton.py) Pros Simple is better than complex Cons Not lazily instantiated This question is not for the discussion of whether or not the singleton design pattern is desirable, is an anti-pattern, or for any religious wars, but to discuss how this pattern is best implemented in Python in such a way that is most Pythonic. In this instance I define 'most Pythonic' to mean that it follows the 'principle of least astonishment'.
You just need a decorator, different depending on the python version. Notice how foo gets printed only once. Python 3.2+ Implementation: from functools import lru_cache @lru_cache(maxsize=None) class CustomClass(object): def __init__(self, arg): print(f"CustomClass initialised with {arg}") self.arg = arg Usage c1 = CustomClass("foo") c2 = CustomClass("foo") c3 = CustomClass("bar") print(c1 == c2) print(c1 == c3) Output >>> CustomClass initialised with foo >>> CustomClass initialised with bar >>> True >>> False Python 3.9+ Implementation: from functools import cache @cache class CustomClass(object): ...
678,236
1,789
1,237,912,863
["python","string","path"]
How do I get the filename without the extension from a path in Python? How do I get the filename without the extension from a path in Python? "/path/to/some/file.txt" → "file"
Python 3.4+ Use pathlib.Path.stem >>> from pathlib import Path >>> Path("/path/to/file.txt").stem 'file' >>> Path("/path/to/file.tar.gz").stem 'file.tar' Python < 3.4 Use os.path.splitext in combination with os.path.basename: >>> import os >>> os.path.splitext(os.path.basename("/path/to/file.txt"))[0] 'file' >>> os.path.splitext(os.path.basename("/path/to/file.tar.gz"))[0] 'file.tar'
582,336
1,758
1,235,491,286
["python","performance","optimization","time-complexity","profiling"]
How do I profile a Python script? Project Euler and other coding contests often have a maximum time to run or people boast of how fast their particular solution runs. With Python, sometimes the approaches are somewhat kludgey - i.e., adding timing code to __main__. What is a good way to profile how long a Python program takes to run?
Python includes a profiler called cProfile. It not only gives the total running time, but also times each function separately, and tells you how many times each function was called, making it easy to determine where you should make optimizations. You can call it from within your code, or from the interpreter, like this: import cProfile cProfile.run('foo()') Even more usefully, you can invoke cProfile when running a script: python -m cProfile myscript.py Or when running a module: python -m cProfile -m mymodule To make it even easier, I made a little batch file called 'profile.bat': python -m cProfile %1 So all I have to do is run: profile euler048.py And I get this: 1007 function calls in 0.061 CPU seconds Ordered by: standard name ncalls tottime percall cumtime percall filename:lineno(function) 1 0.000 0.000 0.061 0.061 <string>:1(<module>) 1000 0.051 0.000 0.051 0.000 euler048.py:2(<lambda>) 1 0.005 0.005 0.061 0.061 euler048.py:2(<module>) 1 0.000 0.000 0.061 0.061 {execfile} 1 0.002 0.002 0.053 0.053 {map} 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects} 1 0.000 0.000 0.000 0.000 {range} 1 0.003 0.003 0.003 0.003 {sum} For more information, check out this tutorial from PyCon 2013 titled Python Profiling, also available via YouTube.
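If you want the report sorted and trimmed from within a script, the standard pstats module pairs with cProfile; foo() below is a placeholder for whatever code you are measuring.

import cProfile
import pstats

profiler = cProfile.Profile()
profiler.enable()
foo()  # the code under test (placeholder)
profiler.disable()

stats = pstats.Stats(profiler)
stats.sort_stats('cumulative')  # or 'tottime', 'calls', ...
stats.print_stats(10)           # print only the 10 most expensive entries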
4,700,614
1,723
1,295,107,803
["python","matplotlib","seaborn","legend"]
How to put the legend outside the plot I have a series of 20 plots (not subplots) to be made in a single figure. I want the legend to be outside of the box. At the same time, I do not want to change the axes, as the size of the figure gets reduced. I want to keep the legend box outside the plot area (I want the legend to be outside at the right side of the plot area). Is there a way to reduce the font size of the text inside the legend box, so that the size of the legend box will be small?
You can make the legend text smaller by specifying set_size of FontProperties. Resources: Legend guide matplotlib.legend matplotlib.pyplot.legend matplotlib.font_manager set_size(self, size) Valid font sizes are xx-small, x-small, small, medium, large, x-large, xx-large, larger, smaller, and None. Real Python: Python Plotting With Matplotlib (Guide) import matplotlib.pyplot as plt from matplotlib.font_manager import FontProperties fontP = FontProperties() fontP.set_size('xx-small') p1, = plt.plot([1, 2, 3], label='Line 1') p2, = plt.plot([3, 2, 1], label='Line 2') plt.legend(handles=[p1, p2], title='title', bbox_to_anchor=(1.05, 1), loc='upper left', prop=fontP) fontsize='xx-small' also works, without importing FontProperties. plt.legend(handles=[p1, p2], title='title', bbox_to_anchor=(1.05, 1), loc='upper left', fontsize='xx-small')
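One practical note: a legend placed outside the axes is often clipped when the figure is saved. Passing bbox_inches='tight' to savefig expands the saved region to include it (the filename is a placeholder):

plt.savefig('plot.png', bbox_inches='tight')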
60,208
1,716
1,221,266,190
["python","switch-statement"]
Replacements for switch statement in Python? I want to write a function in Python that returns different fixed values based on the value of an input index. In other languages I would use a switch or case statement, but Python does not appear to have a switch statement. What are the recommended Python solutions in this scenario?
Python 3.10 (2021) introduced the match-case statement, which provides a first-class implementation of a "switch" for Python. For example: def f(x): match x: case 'a': return 1 case 'b': return 2 case _: return 0 # 0 is the default case if x is not found The match-case statement is considerably more powerful than this simple example. Documentation: match statements (under the "More Control Flow Tools" page) The match statement (under "Compound statements" page) PEP 634 – Structural Pattern Matching: Specification PEP 636 – Structural Pattern Matching: Tutorial If you need to support Python ≤ 3.9, use a dictionary instead: def f(x): return { 'a': 1, 'b': 2, }.get(x, 0) # default case
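If each case needs to run code rather than return a fixed value, the same dictionary idiom works with functions as values; the handler names below are placeholders:

def handle_a():
    return 1

def handle_b():
    return 2

def handle_default():
    return 0

def f(x):
    handlers = {'a': handle_a, 'b': handle_b}
    return handlers.get(x, handle_default)()  # look up the handler, then call it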
1,471,994
1,656
1,253,801,812
["python","pypi","setup.py","python-packaging"]
What is setup.py? What is setup.py and how can it be configured or used?
setup.py is a Python file, the presence of which is an indication that the module/package you are about to install has likely been packaged and distributed with Distutils, which is the standard for distributing Python Modules. This allows you to easily install Python packages. Often it's enough to write: $ pip install . pip will use setup.py to install your module. Avoid calling setup.py directly.
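For reference, a minimal setup.py might look like the following sketch; the project name and dependency are placeholders:

from setuptools import setup, find_packages

setup(
    name='example-package',          # placeholder project name
    version='0.1.0',
    packages=find_packages(),
    install_requires=['requests'],   # placeholder dependency
)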
1,747,817
1,612
1,258,452,473
["python","dictionary","list-comprehension","dictionary-comprehension"]
Create a dictionary with comprehension Can I use list comprehension syntax to create a dictionary? For example, by iterating over pairs of keys and values: d = {... for k, v in zip(keys, values)}
Use a dict comprehension (Python 2.7 and later): {key: value for key, value in zip(keys, values)} Alternatively, use the dict constructor: pairs = [('a', 1), ('b', 2)] dict(pairs) # → {'a': 1, 'b': 2} dict((k, v + 10) for k, v in pairs) # → {'a': 11, 'b': 12} Given separate lists of keys and values, use the dict constructor with zip: keys = ['a', 'b'] values = [1, 2] dict(zip(keys, values)) # → {'a': 1, 'b': 2}
961,632
1,595
1,244,370,158
["python","string","integer"]
Convert integer to string in Python How do I convert an integer to a string? 42 ⟶ "42" For the reverse, see How do I parse a string to a float or int?. Floats can be handled similarly, but handling the decimal points can be tricky because floating-point values are not precise. See Converting a float to a string without rounding it for more specific advice.
>>> str(42) '42' >>> int('42') 42 Links to the documentation: int() str() str(x) converts any object x to a string by calling x.__str__(), or repr(x) if x doesn't have a __str__() method.
8,858,008
1,533
1,326,493,078
["python","file","file-handling","python-os"]
How do I move a file in Python? How can I do the equivalent of mv in Python? mv "path/to/current/file.foo" "path/to/new/destination/for/file.foo"
os.rename(), os.replace(), or shutil.move() All employ the same syntax: import os import shutil os.rename("path/to/current/file.foo", "path/to/new/destination/for/file.foo") os.replace("path/to/current/file.foo", "path/to/new/destination/for/file.foo") shutil.move("path/to/current/file.foo", "path/to/new/destination/for/file.foo") The filename ("file.foo") must be included in both the source and destination arguments. If it differs between the two, the file will be renamed as well as moved. The directory within which the new file is being created must already exist. On Windows, a file with that name must not exist or an exception will be raised, but os.replace() will silently replace a file even in that case. shutil.move simply calls os.rename in most cases. However, if the destination is on a different disk than the source, it will instead copy and then delete the source file.
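For completeness, pathlib offers an object-oriented spelling of the rename approach; like os.rename, it is not guaranteed to work across filesystems, so fall back to shutil.move in that case:

from pathlib import Path

Path("path/to/current/file.foo").rename("path/to/new/destination/for/file.foo")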
9,942,594
1,518
1,333,109,201
["python","unicode","beautifulsoup","python-2.x","python-unicode"]
UnicodeEncodeError: 'ascii' codec can't encode character u'\xa0' in position 20: ordinal not in range(128) I'm having problems dealing with unicode characters from text fetched from different web pages (on different sites). I am using BeautifulSoup. The problem is that the error is not always reproducible; it sometimes works with some pages, and sometimes, it barfs by throwing a UnicodeEncodeError. I have tried just about everything I can think of, and yet I have not found anything that works consistently without throwing some kind of Unicode-related error. One of the sections of code that is causing problems is shown below: agent_telno = agent.find('div', 'agent_contact_number') agent_telno = '' if agent_telno is None else agent_telno.contents[0] p.agent_info = str(agent_contact + ' ' + agent_telno).strip() Here is a stack trace produced on SOME strings when the snippet above is run: Traceback (most recent call last): File "foobar.py", line 792, in <module> p.agent_info = str(agent_contact + ' ' + agent_telno).strip() UnicodeEncodeError: 'ascii' codec can't encode character u'\xa0' in position 20: ordinal not in range(128) I suspect that this is because some pages (or more specifically, pages from some of the sites) may be encoded, whilst others may be unencoded. All the sites are based in the UK and provide data meant for UK consumption - so there are no issues relating to internalization or dealing with text written in anything other than English. Does anyone have any ideas as to how to solve this so that I can CONSISTENTLY fix this problem?
Read the Python Unicode HOWTO. This error is the very first example. Do not use str() to convert from unicode to encoded text / bytes. Instead, use .encode() to encode the string: p.agent_info = u' '.join((agent_contact, agent_telno)).encode('utf-8').strip() or work entirely in unicode.
3,768,895
1,510
1,285,156,339
["python","json","serialization"]
How to make a class JSON serializable How to make a Python class serializable? class FileItem: def __init__(self, fname): self.fname = fname Attempt to serialize to JSON: >>> import json >>> x = FileItem('/foo/bar') >>> json.dumps(x) TypeError: Object of type 'FileItem' is not JSON serializable
Do you have an idea about the expected output? For example, will this do? >>> f = FileItem("/foo/bar") >>> magic(f) '{"fname": "/foo/bar"}' In that case you can merely call json.dumps(f.__dict__). If you want more customized output then you will have to subclass JSONEncoder and implement your own custom serialization. For a trivial example, see below. >>> from json import JSONEncoder >>> class MyEncoder(JSONEncoder): def default(self, o): return o.__dict__ >>> MyEncoder().encode(f) '{"fname": "/foo/bar"}' Then you pass this class into the json.dumps() method as the cls kwarg: json.dumps(f, cls=MyEncoder) If you also want to decode then you'll have to supply a custom object_hook to the JSONDecoder class. For example: >>> from json import JSONDecoder >>> def from_json(json_object): if 'fname' in json_object: return FileItem(json_object['fname']) >>> f = JSONDecoder(object_hook = from_json).decode('{"fname": "/foo/bar"}') >>> f <__main__.FileItem object at 0x9337fac>
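If subclassing JSONEncoder feels heavy, json.dumps also accepts a default callable that is invoked for any object it cannot serialize on its own; this sketch assumes every such object exposes a usable __dict__:

import json

json.dumps(f, default=lambda o: o.__dict__)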
31,684,375
1,508
1,438,108,143
["python","dependencies","python-import","requirements.txt"]
Automatically create file 'requirements.txt' Sometimes I download the Python source code from GitHub and don't know how to install all the dependencies. If there isn't any requirements.txt file I have to create it by hand. Given the Python source code directory, is it possible to create requirements.txt automatically from the import section?
Using Pipenv or another environment-management tool is recommended for improving your development flow. pip3 freeze > requirements.txt # Python3 pip freeze > requirements.txt # Python2 Note that pip freeze records every package installed in the current environment, not just the ones your code actually imports, so run it inside a clean virtual environment. If you do not use a virtual environment, pigar will be a good choice for you.
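The usual way to keep the freeze output clean is to work inside a dedicated virtual environment, so that only your project's dependencies are recorded; <your-dependencies> below is a placeholder:

python3 -m venv .venv
source .venv/bin/activate
pip install <your-dependencies>
pip freeze > requirements.txt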
1,952,464
1,469
1,261,570,435
["python","iterable"]
Python: how to determine if an object is iterable? Is there a method like isiterable? The only solution I have found so far is to call: hasattr(myObj, '__iter__') but I am not sure how foolproof this is.
Checking for __iter__ works on sequence types, but it would fail on e.g. strings in Python 2. I would like to know the right answer too; until then, here is one possibility (which would work on strings, too): try: some_object_iterator = iter(some_object) except TypeError as te: print(some_object, 'is not iterable') The iter built-in checks for the __iter__ method or, in the case of strings, the __getitem__ method. Another general Pythonic approach is to assume an iterable, then fail gracefully if it does not work on the given object. The Python glossary defines this style, duck-typing, as follows: Pythonic programming style that determines an object's type by inspection of its method or attribute signature rather than by explicit relationship to some type object ("If it looks like a duck and quacks like a duck, it must be a duck.") By emphasizing interfaces rather than specific types, well-designed code improves its flexibility by allowing polymorphic substitution. Duck-typing avoids tests using type() or isinstance(). Instead, it typically employs the EAFP (Easier to Ask Forgiveness than Permission) style of programming. ... try: _ = (e for e in my_object) except TypeError: print(my_object, 'is not iterable') The collections module provides some abstract base classes, which allow you to ask classes or instances if they provide particular functionality, for example: from collections.abc import Iterable if isinstance(e, Iterable): # e is iterable However, this does not check for classes that are iterable through __getitem__.
17,330,160
1,427
1,372,279,635
["python","properties","decorator","python-decorators","python-internals"]
How does the @property decorator work in Python? I would like to understand how the built-in function property works. What confuses me is that property can also be used as a decorator, but it only takes arguments when used as a built-in function and not when used as a decorator. This example is from the documentation: class C: def __init__(self): self._x = None def getx(self): return self._x def setx(self, value): self._x = value def delx(self): del self._x x = property(getx, setx, delx, "I'm the 'x' property.") property's arguments are getx, setx, delx and a doc string. In the code below property is used as a decorator. The object of it is the x function, but in the code above there is no place for an object function in the arguments. class C: def __init__(self): self._x = None @property def x(self): """I'm the 'x' property.""" return self._x @x.setter def x(self, value): self._x = value @x.deleter def x(self): del self._x How are the x.setter and x.deleter decorators created in this case?
The property() function returns a special descriptor object: >>> property() <property object at 0x10ff07940> It is this object that has extra methods: >>> property().getter <built-in method getter of property object at 0x10ff07998> >>> property().setter <built-in method setter of property object at 0x10ff07940> >>> property().deleter <built-in method deleter of property object at 0x10ff07998> These act as decorators too. They return a new property object: >>> property().getter(None) <property object at 0x10ff079f0> that is a copy of the old object, but with one of the functions replaced. Remember, that the @decorator syntax is just syntactic sugar; the syntax: @property def foo(self): return self._foo really means the same thing as def foo(self): return self._foo foo = property(foo) so foo the function is replaced by property(foo), which we saw above is a special object. Then when you use @foo.setter(), what you are doing is call that property().setter method I showed you above, which returns a new copy of the property, but this time with the setter function replaced with the decorated method. The following sequence also creates a full-on property, by using those decorator methods. First we create some functions: >>> def getter(self): print('Get!') ... >>> def setter(self, value): print('Set to {!r}!'.format(value)) ... >>> def deleter(self): print('Delete!') ... Then, we create a property object with only a getter: >>> prop = property(getter) >>> prop.fget is getter True >>> prop.fset is None True >>> prop.fdel is None True Next we use the .setter() method to add a setter: >>> prop = prop.setter(setter) >>> prop.fget is getter True >>> prop.fset is setter True >>> prop.fdel is None True Last we add a deleter with the .deleter() method: >>> prop = prop.deleter(deleter) >>> prop.fget is getter True >>> prop.fset is setter True >>> prop.fdel is deleter True Last but not least, the property object acts as a descriptor object, so it has .__get__(), .__set__() and .__delete__() methods to hook into instance attribute getting, setting and deleting: >>> class Foo: pass ... >>> prop.__get__(Foo(), Foo) Get! >>> prop.__set__(Foo(), 'bar') Set to 'bar'! >>> prop.__delete__(Foo()) Delete! The Descriptor Howto includes a pure Python sample implementation of the property() type: class Property: "Emulate PyProperty_Type() in Objects/descrobject.c" def __init__(self, fget=None, fset=None, fdel=None, doc=None): self.fget = fget self.fset = fset self.fdel = fdel if doc is None and fget is not None: doc = fget.__doc__ self.__doc__ = doc def __get__(self, obj, objtype=None): if obj is None: return self if self.fget is None: raise AttributeError("unreadable attribute") return self.fget(obj) def __set__(self, obj, value): if self.fset is None: raise AttributeError("can't set attribute") self.fset(obj, value) def __delete__(self, obj): if self.fdel is None: raise AttributeError("can't delete attribute") self.fdel(obj) def getter(self, fget): return type(self)(fget, self.fset, self.fdel, self.__doc__) def setter(self, fset): return type(self)(self.fget, fset, self.fdel, self.__doc__) def deleter(self, fdel): return type(self)(self.fget, self.fset, fdel, self.__doc__)
8,369,219
1,411
1,322,930,874
["python","string"]
How can I read a text file into a string variable and strip newlines? I have a text file that looks like: ABC DEF How can I read the file into a single-line string without newlines, in this case creating a string 'ABCDEF'? For reading the file into a list of lines, but removing the trailing newline character from each line, see How to read a file without newlines?.
You could use: with open('data.txt', 'r') as file: data = file.read().replace('\n', '') Or if the file content is guaranteed to be one line: with open('data.txt', 'r') as file: data = file.read().rstrip()
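A pathlib equivalent (Python 3.5+), for those who prefer it:

from pathlib import Path

data = Path('data.txt').read_text().replace('\n', '')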
11,248,073
1,409
1,340,897,804
["python","pip","virtualenv","python-packaging"]
How do I remove all packages installed by pip? How do I uninstall all packages installed by pip from my currently activated virtual environment?
I've found this snippet as an alternative solution. It's a more graceful removal of libraries than remaking the virtualenv: pip freeze | xargs pip uninstall -y In case you have packages installed via VCS, you need to exclude those lines and remove the packages manually (adapted from the comments below): pip freeze --exclude-editable | xargs pip uninstall -y If you have packages installed directly from GitHub/GitLab, those entries will contain an @, like: django @ git+https://github.com/django.git@<sha> You can add cut -d "@" -f1 to extract just the package name, which is what pip uninstall needs: pip freeze | cut -d "@" -f1 | xargs pip uninstall -y
2,802,726
1,405
1,273,496,284
["python","if-statement","syntax","conditional-operator"]
Putting a simple if-then-else statement on one line How do I write an if-then-else statement in Python so that it fits on one line? For example, I want a one line version of: if count == N: count = 0 else: count = N + 1 In Objective-C, I would write this as: count = count == N ? 0 : count + 1;
That's more specifically a conditional expression (Python's ternary operator) rather than an if-then statement; here's the Python syntax: value_when_true if condition else value_when_false Better example: (thanks Mr. Burns) 'Yes' if fruit == 'Apple' else 'No' Now with assignment, contrasted with the if syntax: fruit = 'Apple' isApple = True if fruit == 'Apple' else False vs fruit = 'Apple' isApple = False if fruit == 'Apple' : isApple = True
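Applied to the example in the question, the Objective-C line translates directly:

count = 0 if count == N else count + 1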
33,533,148
1,383
1,446,675,474
["python","pycharm","python-typing"]
How do I type hint a method with the type of the enclosing class? I have the following code in Python 3: class Position: def __init__(self, x: int, y: int): self.x = x self.y = y def __add__(self, other: Position) -> Position: return Position(self.x + other.x, self.y + other.y) But my editor (PyCharm) says that the reference Position can not be resolved (in the __add__ method). How should I specify that I expect the return type to be of type Position? I think this is actually a PyCharm issue. It actually uses the information in its warnings, and code completion. But correct me if I'm wrong, and need to use some other syntax.
I guess you got this exception: NameError: name 'Position' is not defined This is because in the original implementation of annotations, Position must be defined before you can use it in an annotation. Python 3.14+: It'll just work Python 3.14 has a new, lazily evaluated annotation implementation specified by PEP 749 and 649. Annotations will be compiled to special __annotate__ functions, executed when an object's __annotations__ dict is first accessed instead of at the point where the annotation itself occurs. Thus, annotating your function as def __add__(self, other: Position) -> Position: no longer requires Position to already exist: class Position: def __add__(self, other: Position) -> Position: ... Python 3.7+, deprecated: from __future__ import annotations from __future__ import annotations turns on an older solution to this problem, PEP 563, where all annotations are saved as strings instead of as __annotate__ functions or evaluated values. This was originally planned to become the default behavior, and almost became the default in 3.10 before being reverted. With the acceptance of PEP 749, this will be deprecated in Python 3.14, and it will be removed in a future Python version. Still, it works for now: from __future__ import annotations class Position: def __add__(self, other: Position) -> Position: ... Python 3+: Use a string This is the original workaround, specified in PEP 484. Write your annotations as string literals containing the text of whatever expression you originally wanted to use as an annotation: class Position: def __add__(self, other: 'Position') -> 'Position': ... from __future__ import annotations effectively automates doing this for all annotations in a file. typing.Self might sometimes be appropriate Introduced in Python 3.11, typing.Self refers to the type of the current instance, even if that type is a subclass of the class the annotation appears in. So if you have the following code: from typing import Self class Parent: def me(self) -> Self: return self class Child(Parent): pass x: Child = Child().me() then Child().me() is treated as returning Child, instead of Parent. This isn't always what you want. But when it is, it's pretty convenient. For Python versions < 3.11, if you have typing_extensions installed, you can use: from typing_extensions import Self Sources The relevant parts of PEP 484, PEP 563, and PEP 649, to spare you the trip: Forward references When a type hint contains names that have not been defined yet, that definition may be expressed as a string literal, to be resolved later. A situation where this occurs commonly is the definition of a container class, where the class being defined occurs in the signature of some of the methods. For example, the following code (the start of a simple binary tree implementation) does not work: class Tree: def __init__(self, left: Tree, right: Tree): self.left = left self.right = right To address this, we write: class Tree: def __init__(self, left: 'Tree', right: 'Tree'): self.left = left self.right = right The string literal should contain a valid Python expression (i.e., compile(lit, '', 'eval') should be a valid code object) and it should evaluate without errors once the module has been fully loaded. The local and global namespace in which it is evaluated should be the same namespaces in which default arguments to the same function would be evaluated. and PEP 563, deprecated: Implementation In Python 3.10, function and variable annotations will no longer be evaluated at definition time. 
Instead, a string form will be preserved in the respective __annotations__ dictionary. Static type checkers will see no difference in behavior, whereas tools using annotations at runtime will have to perform postponed evaluation. ... Enabling the future behavior in Python 3.7 The functionality described above can be enabled starting from Python 3.7 using the following special import: from __future__ import annotations and PEP 649: Overview This PEP adds a new dunder attribute to the objects that support annotations–functions, classes, and modules. The new attribute is called __annotate__, and is a reference to a function which computes and returns that object’s annotations dict. At compile time, if the definition of an object includes annotations, the Python compiler will write the expressions computing the annotations into its own function. When run, the function will return the annotations dict. The Python compiler then stores a reference to this function in __annotate__ on the object. Furthermore, __annotations__ is redefined to be a “data descriptor” which calls this annotation function once and caches the result. Things that you may be tempted to do instead A. Define a dummy Position Before the class definition, place a dummy definition: class Position(object): pass class Position: def __init__(self, x: int, y: int): self.x = x self.y = y def __add__(self, other: Position) -> Position: return Position(self.x + other.x, self.y + other.y) This will get rid of the NameError and may even look OK: >>> Position.__add__.__annotations__ {'other': __main__.Position, 'return': __main__.Position} But is it? >>> for k, v in Position.__add__.__annotations__.items(): ... print(k, 'is Position:', v is Position) return is Position: False other is Position: False And mypy will report a pile of errors: main.py:4: error: Name "Position" already defined on line 1 [no-redef] main.py:11: error: Too many arguments for "Position" [call-arg] main.py:11: error: "Position" has no attribute "x" [attr-defined] main.py:11: error: "Position" has no attribute "y" [attr-defined] Found 4 errors in 1 file (checked 1 source file) B. Monkey-patch in order to add the annotations: You may want to try some Python metaprogramming magic and write a decorator to monkey-patch the class definition in order to add annotations: class Position: ... def __add__(self, other): return self.__class__(self.x + other.x, self.y + other.y) The decorator should be responsible for the equivalent of this: Position.__add__.__annotations__['return'] = Position Position.__add__.__annotations__['other'] = Position It'll work right at runtime: >>> for k, v in Position.__add__.__annotations__.items(): ... print(k, 'is Position:', v is Position) return is Position: True other is Position: True But static analyzers like mypy won't understand it, and static analysis is the biggest use case of type annotations.
2,709,821
1,360
1,272,226,948
["python","class","oop","self"]
What is the purpose of the `self` parameter? Why is it needed? Consider this example: class MyClass: def func(self, name): self.name = name I know that self refers to the specific instance of MyClass. But why must func explicitly include self as a parameter? Why do we need to use self in the method's code? Some other languages make this implicit, or use special syntax instead. For a language-agnostic consideration of the design decision, see What is the advantage of having this/self pointer mandatory explicit?. To close debugging questions where OP omitted a self parameter for a method and got a TypeError, use TypeError: method() takes 1 positional argument but 2 were given instead. If OP omitted self. in the body of the method and got a NameError, consider How can I call a function within a class?.
The reason you need to use self is because Python does not use special syntax to refer to instance attributes. Python decided to do methods in a way that makes the instance to which the method belongs be passed automatically but not received automatically, the first parameter of methods is the instance the method is called on. That makes methods entirely the same as functions and leaves the actual name to use up to you (although self is the convention, and people will generally frown at you when you use something else.) self is not special to the code, it's just another object. Python could have done something else to distinguish normal names from attributes -- special syntax like Ruby has, or requiring declarations like C++ and Java do, or perhaps something yet more different -- but it didn't. Python's all for making things explicit, making it obvious what's what, and although it doesn't do it entirely everywhere, it does do it for instance attributes. That's why assigning to an instance attribute needs to know what instance to assign to, and that's why it needs self.
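A quick sketch of that equivalence: a method is just a function reached through the class, and the instance is passed as the first argument either way.

class MyClass:
    def func(self, name):
        self.name = name

obj = MyClass()
obj.func('Alice')          # the usual call...
MyClass.func(obj, 'Bob')   # ...is the same call written explicitly
print(obj.name)            # prints: Bob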
715,417
1,358
1,238,787,860
["python","string","boolean"]
Converting from a string to boolean in Python How do I convert a string into a boolean in Python? This attempt returns True: >>> bool("False") True
Really, you just compare the string to whatever you expect to accept as representing true, so you can do this: s == 'True' Or to check against a whole bunch of values: s.lower() in ['true', '1', 't', 'y', 'yes', 'yeah', 'yup', 'certainly', 'uh-huh'] Be cautious when using the following: >>> bool("foo") True >>> bool("False") # beware! True >>> bool("") False Empty strings evaluate to False, but everything else evaluates to True. So this should not be used for any kind of parsing purposes.
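If you need this as a reusable parser that rejects unrecognised input instead of guessing, a small helper along these lines works; the accepted spellings are just one possible choice:

def str_to_bool(s):
    truthy = {'true', '1', 't', 'y', 'yes'}
    falsy = {'false', '0', 'f', 'n', 'no'}
    s = s.strip().lower()
    if s in truthy:
        return True
    if s in falsy:
        return False
    raise ValueError('cannot parse %r as a boolean' % s)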
534,839
1,342
1,234,309,881
["python","uuid","guid","uniqueidentifier"]
How to create a GUID/UUID in Python How do I create a GUID/UUID in Python that is platform independent? I hear there is a method using ActivePython on Windows but it's Windows only because it uses COM. Is there a method using plain Python?
The uuid module provides immutable UUID objects (the UUID class) and the functions uuid1(), uuid3(), uuid4(), uuid5() for generating version 1, 3, 4, and 5 UUIDs as specified in RFC 4122. If all you want is a unique ID, you should probably call uuid1() or uuid4(). Note that uuid1() may compromise privacy since it creates a UUID containing the computer’s network address. uuid4() creates a random UUID. UUID versions 6, 7 and 8 - new Universally Unique Identifier (UUID) formats for use in modern applications and as database keys - (draft) rfc - are available from https://pypi.org/project/uuid6/ Docs: Python 2 Python 3 Examples (for both Python 2 and 3): >>> import uuid >>> # make a random UUID >>> uuid.uuid4() UUID('bd65600d-8669-4903-8a14-af88203add38') >>> # Convert a UUID to a string of hex digits in standard form >>> str(uuid.uuid4()) 'f50ec0b7-f960-400d-91f0-c42a6d44e3d0' >>> # Convert a UUID to a 32-character hexadecimal string >>> uuid.uuid4().hex '9fe2c4e93f654fdbb24c02b15259716c'
1,504,717
1,332
1,254,411,614
["python","string","comparison","identity","equality"]
Why does comparing strings using either '==' or 'is' sometimes produce a different result? Two string variables are set to the same value. s1 == s2 always returns True, but s1 is s2 sometimes returns False. If I open my Python interpreter and do the same is comparison, it succeeds: >>> s1 = 'text' >>> s2 = 'text' >>> s1 is s2 True Why is this?
is is identity testing, and == is equality testing. What happens in your code would be emulated in the interpreter like this: >>> a = 'pub' >>> b = ''.join(['p', 'u', 'b']) >>> a == b True >>> a is b False So, no wonder they're not the same, right? In other words: a is b is the equivalent of id(a) == id(b)
845,058
1,323
1,241,950,925
["python","text-files","line-count"]
How to get the line count of a large file cheaply in Python How do I get a line count of a large file in the most memory- and time-efficient manner? def file_len(filename): with open(filename) as f: for i, _ in enumerate(f): pass return i + 1
You can't get any better than that. After all, any solution will have to read the entire file, figure out how many \n you have, and return that result. Do you have a better way of doing that without reading the entire file? Not sure... The best solution will always be I/O-bound, best you can do is make sure you don't use unnecessary memory, but it looks like you have that covered. [Edit May 2023] As commented in many other answers, in Python 3 there are better alternatives. The for loop is not the most efficient. For example, using mmap or buffers is more efficient.
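As a sketch of the buffered approach mentioned in the edit: read fixed-size binary chunks and count the newline bytes, which avoids creating a string object per line. Note that this counts \n characters, so a final line with no trailing newline comes out one lower than the enumerate version.

def count_lines(filename, bufsize=1 << 20):
    count = 0
    with open(filename, 'rb') as f:
        while True:
            chunk = f.read(bufsize)
            if not chunk:  # EOF
                break
            count += chunk.count(b'\n')
    return count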
739,993
1,322
1,239,453,258
["python","module","pip"]
How do I get a list of locally installed Python modules? How do I get a list of Python modules installed on my computer?
Solution Do not use with pip > 10.0! My 50 cents for getting a pip freeze-like list from a Python script: import pip installed_packages = pip.get_installed_distributions() installed_packages_list = sorted(["%s==%s" % (i.key, i.version) for i in installed_packages]) print(installed_packages_list) As a (too long) one liner: sorted(["%s==%s" % (i.key, i.version) for i in pip.get_installed_distributions()]) Giving: ['behave==1.2.4', 'enum34==1.0', 'flask==0.10.1', 'itsdangerous==0.24', 'jinja2==2.7.2', 'jsonschema==2.3.0', 'markupsafe==0.23', 'nose==1.3.3', 'parse-type==0.3.4', 'parse==1.6.4', 'prettytable==0.7.2', 'requests==2.3.0', 'six==1.6.1', 'vioozer-metadata==0.1', 'vioozer-users-server==0.1', 'werkzeug==0.9.4'] Scope This solution applies to the system scope or to a virtual environment scope, and covers packages installed by setuptools, pip and (god forbid) easy_install. My use case I added the result of this call to my Flask server, so when I call it with http://example.com/exampleServer/environment I get the list of packages installed on the server's virtualenv. It makes debugging a whole lot easier. Caveats I have noticed a strange behaviour of this technique - when the Python interpreter is invoked in the same directory as a setup.py file, it does not list the package installed by setup.py. Steps to reproduce: Create a virtual environment $ cd /tmp $ virtualenv test_env New python executable in test_env/bin/python Installing setuptools, pip...done. $ source test_env/bin/activate (test_env) $ Clone a Git repository with setup.py (test_env) $ git clone https://github.com/behave/behave.git Cloning into 'behave'... remote: Reusing existing pack: 4350, done. remote: Total 4350 (delta 0), reused 0 (delta 0) Receiving objects: 100% (4350/4350), 1.85 MiB | 418.00 KiB/s, done. Resolving deltas: 100% (2388/2388), done. Checking connectivity... done. We have behave's setup.py in /tmp/behave: (test_env) $ ls /tmp/behave/setup.py /tmp/behave/setup.py Install the Python package from the Git repository (test_env) $ cd /tmp/behave && pip install . running install ... Installed /private/tmp/test_env/lib/python2.7/site-packages/enum34-1.0-py2.7.egg Finished processing dependencies for behave==1.2.5a1 If we run the aforementioned solution from /tmp >>> import pip >>> sorted(["%s==%s" % (i.key, i.version) for i in pip.get_installed_distributions()]) ['behave==1.2.5a1', 'enum34==1.0', 'parse-type==0.3.4', 'parse==1.6.4', 'six==1.6.1'] >>> import os >>> os.getcwd() '/private/tmp' If we run the aforementioned solution from /tmp/behave >>> import pip >>> sorted(["%s==%s" % (i.key, i.version) for i in pip.get_installed_distributions()]) ['enum34==1.0', 'parse-type==0.3.4', 'parse==1.6.4', 'six==1.6.1'] >>> import os >>> os.getcwd() '/private/tmp/behave' behave==1.2.5a1 is missing from the second example, because the working directory contains behave's setup.py file. I could not find any reference to this issue in the documentation. Perhaps I shall open a bug for it.
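Since pip.get_installed_distributions is gone in modern pip, a rough equivalent on Python 3.8+ uses the standard library's importlib.metadata and needs no pip internals at all:

from importlib.metadata import distributions

installed_packages_list = sorted(
    '%s==%s' % (dist.metadata['Name'], dist.version)
    for dist in distributions()
)
print(installed_packages_list)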
7,604,966
1,319
1,317,344,466
["python","integer"]
Maximum and Minimum values for ints How do I represent minimum and maximum values for integers in Python? In Java, we have Integer.MIN_VALUE and Integer.MAX_VALUE. See also: What is the maximum float in Python?.
Python 3 In Python 3, this question doesn't apply. The plain int type is unbounded. However, you might actually be looking for information about the current interpreter's word size, which will be the same as the machine's word size in most cases. That information is still available in Python 3 as sys.maxsize, which is the maximum value representable by a signed word. Equivalently, it's the size of the largest possible list or in-memory sequence. Generally, the maximum value representable by an unsigned word will be sys.maxsize * 2 + 1, and the number of bits in a word will be math.log2(sys.maxsize * 2 + 2). See this answer for more information. Python 2 In Python 2, the maximum value for plain int values is available as sys.maxint: >>> sys.maxint # on my system, 2**63-1 9223372036854775807 You can calculate the minimum value with -sys.maxint - 1 as shown in the docs. Python seamlessly switches from plain to long integers once you exceed this value. So most of the time, you won't need to know it.
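A quick interactive check on a typical 64-bit build:

>>> import sys
>>> sys.maxsize
9223372036854775807
>>> sys.maxsize + 1   # no overflow; Python 3 ints are unbounded
9223372036854775808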
1,823,058
1,309
1,259,622,703
["python","number-formatting"]
How to print a number using commas as thousands separators How do I print an integer with commas as thousands separators? 1234567 ⟶ 1,234,567 It does not need to be locale-specific to decide between periods and commas.
Locale-agnostic: use _ as the thousand separator f'{value:_}' # For Python ≥3.6 Note that this will NOT format in the user's current locale and will always use _ as the thousand separator, so for example: 1234567 ⟶ 1_234_567 English style: use , as the thousand separator '{:,}'.format(value) # For Python ≥2.7 f'{value:,}' # For Python ≥3.6 Locale-aware import locale locale.setlocale(locale.LC_ALL, '') # Use '' for auto, or force e.g. to 'en_US.UTF-8' '{:n}'.format(value) # For Python ≥2.7 f'{value:n}' # For Python ≥3.6 Reference Per Format Specification Mini-Language, The ',' option signals the use of a comma for a thousands separator. For a locale aware separator, use the 'n' integer presentation type instead. and: The '_' option signals the use of an underscore for a thousands separator for floating point presentation types and for integer presentation type 'd'. For integer presentation types 'b', 'o', 'x', and 'X', underscores will be inserted every 4 digits.
1,663,807
1,305
1,257,197,184
["python","list","for-loop","iterator"]
How do I iterate through two lists in parallel? I have two iterables, and I want to go over them in pairs: foo = [1, 2, 3] bar = [4, 5, 6] for (f, b) in iterate_together(foo, bar): print("f:", f, " | b:", b) That should result in: f: 1 | b: 4 f: 2 | b: 5 f: 3 | b: 6 One way to do it is to iterate over the indices: for i in range(len(foo)): print("f:", foo[i], " | b:", bar[i]) But that seems somewhat unpythonic to me. Is there a better way to do it? Related tasks: * How to merge lists into a list of tuples? - given the above foo and bar, create the list [(1, 4), (2, 5), (3, 6)]. * How can I make a dictionary (dict) from separate lists of keys and values? - create the dict {1: 4, 2: 5, 3: 6}. * Create a dictionary with comprehension - constructing dict using zip in a dict comprehension.
Python 3 for f, b in zip(foo, bar): print(f, b) zip stops when the shorter of foo or bar stops. In Python 3, zip returns an iterator of tuples, like itertools.izip in Python 2. To get a list of tuples, use list(zip(foo, bar)). And to zip until both iterators are exhausted, you would use itertools.zip_longest. Python 2 In Python 2, zip returns a list of tuples. This is fine when foo and bar are not massive. If they are both massive then forming zip(foo,bar) is an unnecessarily massive temporary variable, and should be replaced by itertools.izip or itertools.izip_longest, which returns an iterator instead of a list. import itertools for f,b in itertools.izip(foo,bar): print(f,b) for f,b in itertools.izip_longest(foo,bar): print(f,b) izip stops when either foo or bar is exhausted. izip_longest stops when both foo and bar are exhausted. When the shorter iterator(s) are exhausted, izip_longest yields a tuple with None in the position corresponding to that iterator. You can also set a different fillvalue besides None if you wish. See here for the full story. Note also that zip and its zip-like brethren can accept an arbitrary number of iterables as arguments. For example, for num, cheese, color in zip([1,2,3], ['manchego', 'stilton', 'brie'], ['red', 'blue', 'green']): print('{} {} {}'.format(num, color, cheese)) prints 1 red manchego 2 blue stilton 3 green brie
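Since Python 3.10, zip also accepts strict=True, which raises a ValueError instead of silently truncating when the iterables have different lengths:

>>> for f, b in zip(foo, bar, strict=True):
...     print(f, b)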
472,000
1,296
1,232,689,043
["python","oop","python-internals","slots"]
Usage of __slots__? What is the purpose of __slots__ in Python — especially with respect to when I would want to use it, and when not?
TLDR The special attribute __slots__ allows you to explicitly state which instance attributes you expect your object instances to have, with the expected results: faster attribute access. space savings in memory. The space savings is from: Storing value references in slots instead of __dict__. Denying __dict__ and __weakref__ creation if parent classes deny them and you declare __slots__. This has the effect of denying the creation of non-slotted attributes on its instances, including within the class body (such as in methods like __init__). Quick Caveats Small caveat, you should only declare a particular slot one time in an inheritance tree. For example: class Base: __slots__ = 'foo', 'bar' class Right(Base): __slots__ = 'baz', class Wrong(Base): __slots__ = 'foo', 'bar', 'baz' # redundant foo and bar Python doesn't object when you get this wrong (it probably should), and problems might not otherwise manifest, but your objects will take up more space than they should. Python 3.8: >>> from sys import getsizeof >>> getsizeof(Right()), getsizeof(Wrong()) (56, 72) This is because Base's slot descriptor has a slot separate from Wrong's. This shouldn't usually come up, but it could: >>> w = Wrong() >>> w.foo = 'foo' >>> Base.foo.__get__(w) Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: foo >>> Wrong.foo.__get__(w) 'foo' The biggest caveat is for multiple inheritance - multiple "parent classes with nonempty slots" cannot be combined. To accommodate this restriction, follow best practices: create abstractions with empty __slots__ for every parent class (or for every parent class but one), then inherit from these abstractions instead of their concrete versions in your new concrete class. (The original parent classes should also inherit from their respective abstractions, of course.) See section on multiple inheritance below for an example. Requirements To have attributes named in __slots__ to actually be stored in slots instead of a __dict__, a class must inherit from object (automatic in Python 3, but must be explicit in Python 2). To prevent the creation of a __dict__, you must inherit from object and all classes in the inheritance must declare __slots__ and none of them can have a '__dict__' entry. There are a lot of details if you wish to keep reading. Why use __slots__ Faster attribute access The creator of Python, Guido van Rossum, states that he actually created __slots__ for faster attribute access. It's trivial to demonstrate measurably significant speedup: import timeit class Foo(object): __slots__ = 'foo', class Bar(object): pass slotted = Foo() not_slotted = Bar() def get_set_delete_fn(obj): def get_set_delete(): obj.foo = 'foo' obj.foo del obj.foo return get_set_delete and >>> min(timeit.repeat(get_set_delete_fn(slotted))) 0.2846834529991611 >>> min(timeit.repeat(get_set_delete_fn(not_slotted))) 0.3664822799983085 The slotted access is almost 30% faster in Python 3.5 on Ubuntu. >>> 0.3664822799983085 / 0.2846834529991611 1.2873325658284342 In Python 2 on Windows I have measured it about 15% faster. Memory Savings Another purpose of __slots__ is to reduce the space in memory that each object instance takes up. My own contribution to the documentation clearly states the reasons behind this: The space saved over using __dict__ can be significant. SQLAlchemy attributes a lot of memory savings to __slots__. 
To verify this, using the Anaconda distribution of Python 2.7 on Ubuntu Linux, with guppy.hpy (aka heapy) and sys.getsizeof, the size of a class instance without __slots__ declared, and nothing else, is 64 bytes. That does not include the __dict__. Thank you Python for lazy evaluation again, the __dict__ is apparently not called into existence until it is referenced, but classes without data are usually useless. When called into existence, the __dict__ attribute is a minimum of 280 bytes additionally. In contrast, a class instance with __slots__ declared to be () (no data) is only 16 bytes, and 56 total bytes with one item in slots, 64 with two. For 64 bit Python, I illustrate the memory consumption in bytes in Python 2.7 and 3.6, for __slots__ and __dict__ (no slots defined) for each point where the dict grows in 3.6 (except for 0, 1, and 2 attributes):

        Python 2.7             Python 3.6
attrs   __slots__  __dict__*   __slots__  __dict__*    *(no slots defined)
none      16       56 + 272†     16       56 + 112†    †if __dict__ referenced
one       48       56 + 272      48       56 + 112
two       56       56 + 272      56       56 + 112
six       88       56 + 1040     88       56 + 152
11       128       56 + 1040    128       56 + 240
22       216       56 + 3344    216       56 + 408
43       384       56 + 3344    384       56 + 752

So, in spite of smaller dicts in Python 3, we see how nicely __slots__ scales for instances to save us memory, and that is a major reason you would want to use __slots__. Just for completeness of my notes, note that there is a one-time cost per slot in the class's namespace of 64 bytes in Python 2, and 72 bytes in Python 3, because slots use data descriptors like properties, called "members". >>> Foo.foo <member 'foo' of 'Foo' objects> >>> type(Foo.foo) <class 'member_descriptor'> >>> getsizeof(Foo.foo) 72 Demonstration To deny the creation of a __dict__, you must subclass object. Everything subclasses object in Python 3, but in Python 2 you had to be explicit: class Base(object): __slots__ = () now: >>> b = Base() >>> b.a = 'a' Traceback (most recent call last): File "<pyshell#38>", line 1, in <module> b.a = 'a' AttributeError: 'Base' object has no attribute 'a' Or subclass another class that defines __slots__ class Child(Base): __slots__ = ('a',) and now: c = Child() c.a = 'a' but: >>> c.b = 'b' Traceback (most recent call last): File "<pyshell#42>", line 1, in <module> c.b = 'b' AttributeError: 'Child' object has no attribute 'b' To allow __dict__ creation while subclassing slotted objects, just add '__dict__' to the __slots__ (note that slots are ordered, and you shouldn't repeat slots that are already in parent classes): class SlottedWithDict(Child): __slots__ = ('__dict__', 'b') swd = SlottedWithDict() swd.a = 'a' swd.b = 'b' swd.c = 'c' and >>> swd.__dict__ {'c': 'c'} Or you don't even need to declare __slots__ in your subclass, and you will still use slots from the parents, but not restrict the creation of a __dict__: class NoSlots(Child): pass ns = NoSlots() ns.a = 'a' ns.b = 'b' and: >>> ns.__dict__ {'b': 'b'} However, __slots__ may cause problems for multiple inheritance: class BaseA(object): __slots__ = ('a',) class BaseB(object): __slots__ = ('b',) Creating a child class from parents with both non-empty slots fails: >>> class Child(BaseA, BaseB): __slots__ = () Traceback (most recent call last): File "<pyshell#68>", line 1, in <module> class Child(BaseA, BaseB): __slots__ = () TypeError: Error when calling the metaclass bases multiple bases have instance lay-out conflict If you run into this problem, you could just remove __slots__ from the parents, or if you have control of the
parents, give them empty slots, or refactor to abstractions: from abc import ABC class AbstractA(ABC): __slots__ = () class BaseA(AbstractA): __slots__ = ('a',) class AbstractB(ABC): __slots__ = () class BaseB(AbstractB): __slots__ = ('b',) class Child(AbstractA, AbstractB): __slots__ = ('a', 'b') c = Child() # no problem! Add '__dict__' to __slots__ to get dynamic assignment: class Foo(object): __slots__ = 'bar', 'baz', '__dict__' and now: >>> foo = Foo() >>> foo.boink = 'boink' So with '__dict__' in slots we lose some of the size benefits, with the upside of having dynamic assignment and still having slots for the names we do expect. When you inherit from an object that isn't slotted, you get the same sort of semantics when you use __slots__ - names that are in __slots__ point to slotted values, while any other values are put in the instance's __dict__. Avoiding __slots__ because you want to be able to add attributes on the fly is actually not a good reason - just add "__dict__" to your __slots__ if this is required. You can similarly add __weakref__ to __slots__ explicitly if you need that feature. Set to empty tuple when subclassing a namedtuple The namedtuple builtin makes immutable instances that are very lightweight (essentially, the size of tuples), but to get the benefits you need to declare __slots__ = () yourself if you subclass them: from collections import namedtuple class MyNT(namedtuple('MyNT', 'bar baz')): """MyNT is an immutable and lightweight object""" __slots__ = () usage: >>> nt = MyNT('bar', 'baz') >>> nt.bar 'bar' >>> nt.baz 'baz' And trying to assign an unexpected attribute raises an AttributeError because we have prevented the creation of __dict__: >>> nt.quux = 'quux' Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'MyNT' object has no attribute 'quux' You can allow __dict__ creation by leaving off __slots__ = (), but you can't use non-empty __slots__ with subtypes of tuple. Biggest Caveat: Multiple inheritance Even when non-empty slots are the same for multiple parents, they cannot be used together: class Foo(object): __slots__ = 'foo', 'bar' class Bar(object): __slots__ = 'foo', 'bar' # alas, would work if empty, i.e. () >>> class Baz(Foo, Bar): pass Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: Error when calling the metaclass bases multiple bases have instance lay-out conflict Using an empty __slots__ in the parent seems to provide the most flexibility, allowing the child to choose to prevent or allow (by adding '__dict__' to get dynamic assignment, see the section above) the creation of a __dict__: class Foo(object): __slots__ = () class Bar(object): __slots__ = () class Baz(Foo, Bar): __slots__ = ('foo', 'bar') b = Baz() b.foo, b.bar = 'foo', 'bar' You don't have to have slots - so if you add them, and remove them later, it shouldn't cause any problems. Going out on a limb here: If you're composing mixins or using abstract base classes, which aren't intended to be instantiated, an empty __slots__ in those parents seems to be the best way to go in terms of flexibility for subclassers.
To demonstrate, first, let's create a class with code we'd like to use under multiple inheritance class AbstractBase: __slots__ = () def __init__(self, a, b): self.a = a self.b = b def __repr__(self): return f'{type(self).__name__}({repr(self.a)}, {repr(self.b)})' We could use the above directly by inheriting and declaring the expected slots: class Foo(AbstractBase): __slots__ = 'a', 'b' But we don't care about that, that's trivial single inheritance, we need another class we might also inherit from, maybe with a noisy attribute: class AbstractBaseC: __slots__ = () @property def c(self): print('getting c!') return self._c @c.setter def c(self, arg): print('setting c!') self._c = arg Now if both bases had nonempty slots, we couldn't do the below. (In fact, if we wanted, we could have given AbstractBase nonempty slots a and b, and left them out of the below declaration - leaving them in would be wrong): class Concretion(AbstractBase, AbstractBaseC): __slots__ = 'a b _c'.split() And now we have functionality from both via multiple inheritance, and can still deny __dict__ and __weakref__ instantiation: >>> c = Concretion('a', 'b') >>> c.c = c setting c! >>> c.c getting c! Concretion('a', 'b') >>> c.d = 'd' Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'Concretion' object has no attribute 'd' Other cases to avoid slots Avoid them when you want to perform __class__ assignment with another class that doesn't have them (and you can't add them) unless the slot layouts are identical. (I am very interested in learning who is doing this and why.) Avoid them if you want to subclass variable length builtins like long, tuple, or str, and you want to add attributes to them. Avoid them if you insist on providing default values via class attributes for instance variables. You may be able to tease out further caveats from the rest of the __slots__ documentation. Critiques of other answers The current top answers cite outdated information and are quite hand-wavy and miss the mark in some important ways. Do not "only use __slots__ when instantiating lots of objects" I quote: "You would want to use __slots__ if you are going to instantiate a lot (hundreds, thousands) of objects of the same class." Abstract Base Classes, for example, from the collections module, are not instantiated, yet __slots__ are declared for them. Why? If a user wishes to deny __dict__ or __weakref__ creation, those things must not be available in the parent classes. __slots__ contributes to reusability when creating interfaces or mixins. It is true that many Python users aren't writing for reusability, but when you are, having the option to deny unnecessary space usage is valuable. __slots__ doesn't break pickling When pickling a slotted object, you may find it complains with a misleading TypeError: >>> pickle.loads(pickle.dumps(f)) TypeError: a class that defines __slots__ without defining __getstate__ cannot be pickled This is actually incorrect. This message comes from the oldest protocol, which is the default. You can select the latest protocol with the -1 argument. In Python 2.7 this would be 2 (which was introduced in 2.3), and in 3.6 it is 4. >>> pickle.loads(pickle.dumps(f, -1)) <__main__.Foo object at 0x1129C770> in Python 2.7: >>> pickle.loads(pickle.dumps(f, 2)) <__main__.Foo object at 0x1129C770> in Python 3.6 >>> pickle.loads(pickle.dumps(f, 4)) <__main__.Foo object at 0x1129C770> So I would keep this in mind, as it is a solved problem. 
Critique of the (until Oct 2, 2016) accepted answer The first paragraph is half short explanation, half predictive. Here's the only part that actually answers the question: The proper use of __slots__ is to save space in objects. Instead of having a dynamic dict that allows adding attributes to objects at anytime, there is a static structure which does not allow additions after creation. This saves the overhead of one dict for every object that uses slots The second half is wishful thinking, and off the mark: While this is sometimes a useful optimization, it would be completely unnecessary if the Python interpreter was dynamic enough so that it would only require the dict when there actually were additions to the object. Python actually does something similar to this, only creating the __dict__ when it is accessed, but creating lots of objects with no data is fairly ridiculous. The second paragraph oversimplifies and misses actual reasons to avoid __slots__. The below is not a real reason to avoid slots (for actual reasons, see the rest of my answer above): They change the behavior of the objects that have slots in a way that can be abused by control freaks and static typing weenies. It then goes on to discuss other ways of accomplishing that perverse goal with Python, not discussing anything to do with __slots__. The third paragraph is more wishful thinking. Together it is mostly off-the-mark content that the answerer didn't even author and contributes ammunition to critics of the site. Memory usage evidence Create some normal objects and slotted objects: >>> class Foo(object): pass >>> class Bar(object): __slots__ = () Instantiate a million of them: >>> foos = [Foo() for f in xrange(1000000)] >>> bars = [Bar() for b in xrange(1000000)] Inspect with guppy.hpy().heap(): >>> guppy.hpy().heap() Partition of a set of 2028259 objects. Total size = 99763360 bytes. Index Count % Size % Cumulative % Kind (class / dict of class) 0 1000000 49 64000000 64 64000000 64 __main__.Foo 1 169 0 16281480 16 80281480 80 list 2 1000000 49 16000000 16 96281480 97 __main__.Bar 3 12284 1 987472 1 97268952 97 str ... Access the regular objects and their __dict__ and inspect again: >>> for f in foos: ... f.__dict__ >>> guppy.hpy().heap() Partition of a set of 3028258 objects. Total size = 379763480 bytes. Index Count % Size % Cumulative % Kind (class / dict of class) 0 1000000 33 280000000 74 280000000 74 dict of __main__.Foo 1 1000000 33 64000000 17 344000000 91 __main__.Foo 2 169 0 16281480 4 360281480 95 list 3 1000000 33 16000000 4 376281480 99 __main__.Bar 4 12284 0 987472 0 377268952 99 str ... This is consistent with the history of Python, from Unifying types and classes in Python 2.2. If you subclass a built-in type, extra space is automatically added to the instances to accommodate __dict__ and __weakrefs__. (The __dict__ is not initialized until you use it though, so you shouldn't worry about the space occupied by an empty dictionary for each instance you create.) If you don't need this extra space, you can add the phrase "__slots__ = []" to your class.
403,421
1,290
1,230,741,692
["python","list","sorting","reverse"]
How do I sort a list of objects based on an attribute of the objects? I have a list of Python objects that I want to sort by a specific attribute of each object: [Tag(name="toe", count=10), Tag(name="leg", count=2), ...] How do I sort the list by .count in descending order?
To sort the list in place: orig_list.sort(key=lambda x: x.count, reverse=True) To return a new list, use sorted: new_list = sorted(orig_list, key=lambda x: x.count, reverse=True) Explanation: key=lambda x: x.count sorts by count. reverse=True sorts in descending order. More on sorting by keys.
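If you'd rather avoid the lambda, operator.attrgetter from the standard library does the same job and is arguably more readable (a small sketch of mine, not part of the original answer):

from operator import attrgetter

new_list = sorted(orig_list, key=attrgetter('count'), reverse=True)

attrgetter also accepts several attribute names, e.g. attrgetter('count', 'name'), which gives you tie-breaking on a second attribute for free.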
8,270,092
1,283
1,322,229,081
["python","string","trim","removing-whitespace"]
Remove all whitespace in a string I want to eliminate all the whitespace from a string, on both ends, and in between words. I have this Python code: def my_handle(self): sentence = ' hello apple ' sentence.strip() But that only eliminates the whitespace on both sides of the string. How do I remove all whitespace?
If you want to remove leading and ending whitespace, use str.strip(): >>> " hello apple ".strip() 'hello apple' If you want to remove all space characters, use str.replace() (NB this only removes the “normal” ASCII space character ' ' U+0020 but not any other whitespace): >>> " hello apple ".replace(" ", "") 'helloapple' If you want to remove all whitespace and then leave a single space character between words, use str.split() followed by str.join(): >>> " ".join(" hello apple ".split()) 'hello apple' If you want to remove all whitespace then change the above leading " " to "": >>> "".join(" hello apple ".split()) 'helloapple'
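If you need to remove every kind of whitespace (tabs, newlines, and so on), not just the ASCII space character, a regular expression does it in one call (a small sketch, not from the original answer):

import re

>>> re.sub(r"\s+", "", " hello\tapple\n")
'helloapple'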
129,507
1,276
1,222,286,435
["python","unit-testing","exception"]
How do you test that a Python function throws an exception? How does one write a unit test that fails only if a function doesn't throw an expected exception?
Use TestCase.assertRaises from the unittest module, for example: import unittest import mymod class MyTestCase(unittest.TestCase): def test1(self): self.assertRaises(SomeCoolException, mymod.myfunc)
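Since Python 2.7, assertRaises can also be used as a context manager, which is handy when the call doesn't fit neatly into the one-line form (a sketch, not part of the original answer):

import unittest
import mymod

class MyTestCase(unittest.TestCase):
    def test2(self):
        with self.assertRaises(SomeCoolException) as cm:
            mymod.myfunc()
        # cm.exception holds the raised exception instance,
        # so you can also assert on its message here

The context-manager form is especially useful when the exception is raised by an expression rather than a plain function call.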
3,462,143
1,270
1,281,555,490
["python","performance","list","set","set-difference"]
Get difference between two lists with Unique Entries I have two lists in Python: temp1 = ['One', 'Two', 'Three', 'Four'] temp2 = ['One', 'Two'] Assuming the elements in each list are unique, I want to create a third list with items from the first list which are not in the second list: temp3 = ['Three', 'Four'] Are there any fast ways without cycles and checking?
To get elements which are in temp1 but not in temp2 (assuming uniqueness of the elements in each list): In [5]: list(set(temp1) - set(temp2)) Out[5]: ['Four', 'Three'] Beware that it is asymmetric: In [5]: set([1, 2]) - set([2, 3]) Out[5]: set([1]) where you might expect/want it to equal set([1, 3]). If you do want set([1, 3]) as your answer, you can use set([1, 2]).symmetric_difference(set([2, 3])).
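As an aside (not in the original answer), the symmetric difference also has an operator form on sets, so the call above can be written more compactly:

In [6]: set([1, 2]) ^ set([2, 3])
Out[6]: set([1, 3])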
12,453,580
1,260
1,347,859,945
["python","string","list","concatenation"]
How to concatenate (join) items in a list to a single string How do I concatenate a list of strings into a single string? For example, given ['this', 'is', 'a', 'sentence'], how do I get "this-is-a-sentence"? For handling a few strings in separate variables, see How do I append one string to another in Python?. For the opposite process - creating a list from a string - see How do I split a string into a list of characters? or How do I split a string into a list of words? as appropriate.
Use str.join: >>> words = ['this', 'is', 'a', 'sentence'] >>> '-'.join(words) 'this-is-a-sentence' >>> ' '.join(words) 'this is a sentence'
4,028,904
1,257
1,288,136,599
["python","cross-platform","home-directory"]
What is a cross-platform way to get the home directory? I need to get the location of the home directory of the current logged-on user. Currently, I've been using the following on Linux: os.getenv("HOME") However, this does not work on Windows. What is the correct cross-platform way to do this ?
On Python 3.5+ you can use pathlib.Path.home(): from pathlib import Path home = Path.home() # example usage: with open(home / ".ssh" / "known_hosts") as f: lines = f.readlines() to get a pathlib.PosixPath object. Use str() to convert to a string if necessary. On older Python versions, you can use os.path.expanduser. from os.path import expanduser home = expanduser("~")
1,185,524
1,254
1,248,641,678
["python","string","whitespace","trim","strip"]
How do I trim whitespace? Is there a Python function that will trim whitespace (spaces and tabs) from a string? So that given input " \t example string\t " becomes "example string".
For whitespace on both sides, use str.strip: s = " \t a string example\t " s = s.strip() For whitespace on the right side, use str.rstrip: s = s.rstrip() For whitespace on the left side, use str.lstrip: s = s.lstrip() You can provide an argument to strip arbitrary characters to any of these functions, like this: s = s.strip(' \t\n\r') This will strip any space, \t, \n, or \r characters from both sides of the string. The examples above only remove whitespace from the left-hand and right-hand sides of the string. If you want to also remove characters from the middle of a string, try re.sub: import re print(re.sub(r'\s+', '', s)) That should print out: astringexample
2,793,324
1,216
1,273,304,908
["python","list"]
Is there a simple way to delete a list element by value? I want to remove a value from a list if it exists in the list (which it may not). a = [1, 2, 3, 4] b = a.index(6) del a[b] print(a) The above gives the error: ValueError: list.index(x): x not in list So I have to do this: a = [1, 2, 3, 4] try: b = a.index(6) del a[b] except: pass print(a) But is there not a simpler way to do this?
To remove the first occurrence of an element, use list.remove: >>> xs = ['a', 'b', 'c', 'd'] >>> xs.remove('b') >>> print(xs) ['a', 'c', 'd'] To remove all occurrences of an element, use a list comprehension: >>> xs = ['a', 'b', 'c', 'd', 'b', 'b', 'b', 'b'] >>> xs = [x for x in xs if x != 'b'] >>> print(xs) ['a', 'c', 'd']
20,638,006
1,215
1,387,293,891
["python","dictionary","pandas","dataframe"]
Convert list of dictionaries to a pandas DataFrame How can I convert a list of dictionaries into a DataFrame? I want to turn [{'points': 50, 'time': '5:00', 'year': 2010}, {'points': 25, 'time': '6:00', 'month': "february"}, {'points':90, 'time': '9:00', 'month': 'january'}, {'points_h1':20, 'month': 'june'}] into month points points_h1 time year 0 NaN 50 NaN 5:00 2010 1 february 25 NaN 6:00 NaN 2 january 90 NaN 9:00 NaN 3 june NaN 20 NaN NaN
If ds is a list of dicts: df = pd.DataFrame(ds) Note: this does not work with nested data.
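For nested data, pandas provides json_normalize, which flattens nested dicts into dotted column names (a minimal sketch assuming pandas >= 1.0, where it is available at the top level; not part of the original answer):

import pandas as pd

ds = [{'points': 50, 'meta': {'year': 2010}},
      {'points': 25, 'meta': {'year': 2011}}]
df = pd.json_normalize(ds)
# columns become 'points' and 'meta.year'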
16,923,281
1,166
1,370,364,416
["python","pandas","dataframe","csv","file-io"]
Writing a pandas DataFrame to CSV file I have a dataframe in pandas which I would like to write to a CSV file. I am doing this using: df.to_csv('out.csv') And getting the following error: UnicodeEncodeError: 'ascii' codec can't encode character u'\u03b1' in position 20: ordinal not in range(128) Is there any way to get around this easily (i.e. I have unicode characters in my data frame)? And is there a way to write to a tab delimited file instead of a CSV using e.g. a 'to-tab' method (that I don't think exists)?
To delimit by a tab you can use the sep argument of to_csv: df.to_csv(file_name, sep='\t') To use a specific encoding (e.g. 'utf-8') use the encoding argument: df.to_csv(file_name, sep='\t', encoding='utf-8') In many cases you will want to remove the index and add a header: df.to_csv(file_name, sep='\t', encoding='utf-8', index=False, header=True)
626,759
1,164
1,236,613,285
["python","list","tuples"]
What's the difference between lists and tuples? What are the differences between lists and tuples, and what are their respective advantages and disadvantages?
Apart from tuples being immutable there is also a semantic distinction that should guide their usage. Tuples are heterogeneous data structures (i.e., their entries have different meanings), while lists are homogeneous sequences. Tuples have structure, lists have order. Using this distinction makes code more explicit and understandable. One example would be pairs of page and line number to reference locations in a book, e.g.: my_location = (42, 11) # page number, line number You can then use this as a key in a dictionary to store notes on locations. A list on the other hand could be used to store multiple locations. Naturally one might want to add or remove locations from the list, so it makes sense that lists are mutable. On the other hand it doesn't make sense to add or remove items from an existing location - hence tuples are immutable. There might be situations where you want to change items within an existing location tuple, for example when iterating through the lines of a page. But tuple immutability forces you to create a new location tuple for each new value. This seems inconvenient on the face of it, but using immutable data like this is a cornerstone of value types and functional programming techniques, which can have substantial advantages. There are some interesting articles on this issue, e.g. "Python Tuples are Not Just Constant Lists" or "Understanding tuples vs. lists in Python". The official Python documentation also mentions this: "Tuples are immutable, and usually contain a heterogeneous sequence ...". In a statically typed language like Haskell the values in a tuple generally have different types and the length of the tuple must be fixed. In a list the values all have the same type and the length is not fixed. So the difference is very obvious. Finally there is the namedtuple in Python, which makes sense because a tuple is already supposed to have structure. This underlines the idea that tuples are a light-weight alternative to classes and instances.
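A brief illustration of that last point (my addition, not the answer author's): a namedtuple gives the positions names while keeping tuple semantics, so the location example stays hashable and usable as a dict key:

from collections import namedtuple

Location = namedtuple('Location', ['page', 'line'])
my_location = Location(page=42, line=11)
notes = {my_location: "check this passage"}  # still works as a dict key
print(my_location.page)  # 42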
1,773,805
1,161
1,258,758,976
["python","yaml"]
How can I parse a YAML file in Python How can I parse a YAML file in Python?
The easiest method without relying on C headers is PyYaml (documentation), which can be installed via pip install pyyaml: import yaml with open("example.yaml") as stream: try: print(yaml.safe_load(stream)) except yaml.YAMLError as exc: print(exc) yaml.load() also exists, but yaml.safe_load() should always be preferred to avoid introducing the possibility of arbitrary code execution. So unless you explicitly need the arbitrary object serialization/deserialization, use safe_load. The PyYaml project supports versions up through the YAML 1.1 specification. If YAML 1.2 specification support is needed, see ruamel.yaml as noted in this answer. Also, you could use oyaml, a drop-in replacement for pyyaml that keeps your YAML file ordered the same way you had it.
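Writing YAML back out works symmetrically; a minimal sketch (not part of the original answer) using PyYAML's safe_dump:

import yaml

data = {"name": "example", "items": [1, 2, 3]}
with open("out.yaml", "w") as stream:
    # default_flow_style=False writes block style, one item per line
    yaml.safe_dump(data, stream, default_flow_style=False)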
9,233,027
1,152
1,328,899,437
["python","python-3.x","unicode","file-io","decode"]
UnicodeDecodeError: 'charmap' codec can't decode byte X in position Y: character maps to <undefined> I'm trying to get a Python 3 program to do some manipulations with a text file filled with information. However, when trying to read the file I get the following error: Traceback (most recent call last): File "SCRIPT LOCATION", line NUMBER, in <module> text = file.read() File "C:\Python31\lib\encodings\cp1252.py", line 23, in decode return codecs.charmap_decode(input,self.errors,decoding_table)[0] UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 2907500: character maps to `<undefined>` Editor's note: After reading this Q&A, see How to determine the encoding of text if you need help figuring out the encoding of the file you are trying to open.
The file in question is not using the CP1252 encoding. It's using another encoding. Which one you have to figure out yourself. Common ones are Latin-1 and UTF-8. Since 0x90 doesn't actually mean anything in Latin-1, UTF-8 (where 0x90 is a continuation byte) is more likely. You specify the encoding when you open the file: file = open(filename, encoding="utf-8")
20,180,543
1,142
1,385,324,701
["python"]
How do I check the versions of Python modules? I installed the Python modules construct and statlib using setuptools: sudo apt-get install python-setuptools sudo easy_install statlib sudo easy_install construct How do I check their versions from the command line?
Use pip instead of easy_install. With pip, list all installed packages and their versions via: pip freeze On most Linux systems, you can pipe this to grep (or findstr on Windows) to find the row for the particular package you're interested in. Linux: pip freeze | grep lxml lxml==2.3 Windows: pip freeze | findstr lxml lxml==2.3 For an individual module, you can try the __version__ attribute. However, there are modules without it: python -c "import requests; print(requests.__version__)" 2.14.2 python -c "import lxml; print(lxml.__version__)" Traceback (most recent call last): File "<string>", line 1, in <module> AttributeError: 'module' object has no attribute '__version__' Lastly, as the commands in your question are prefixed with sudo, it appears you're installing to the global python environment. I strongly advise taking a look at Python virtual environment managers, for example virtualenvwrapper.
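On Python 3.8+, the standard library can also report an installed distribution's version from its metadata, even for packages without __version__ (a sketch of mine, not in the original answer; the printed version is illustrative):

python -c "from importlib.metadata import version; print(version('lxml'))"
2.3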
9,733,638
1,128
1,331,884,009
["python","json","python-requests","cherrypy"]
How can I POST JSON data with Python's Requests library? I need to POST JSON content from a client to a server. I'm using Python 2.7.1 and simplejson. The client is using Requests. The server is CherryPy. I can GET hard-coded JSON content from the server (the code is not shown), but when I try to POST JSON content to the server, I get "400 Bad Request". Here is my client code: data = {'sender': 'Alice', 'receiver': 'Bob', 'message': 'We did it!'} data_json = simplejson.dumps(data) payload = {'json_payload': data_json} r = requests.post("http://localhost:8080", data=payload) Here is the server code. class Root(object): def __init__(self, content): self.content = content print self.content # This works exposed = True def GET(self): cherrypy.response.headers['Content-Type'] = 'application/json' return simplejson.dumps(self.content) def POST(self): self.content = simplejson.loads(cherrypy.request.body.read()) How can I fix this problem?
Starting with Requests version 2.4.2, you can use the json= parameter (which takes a dictionary) instead of data= (which takes a string) in the call: >>> import requests >>> r = requests.post('http://httpbin.org/post', json={"key": "value"}) >>> r.status_code 200 >>> r.json() {'args': {}, 'data': '{"key": "value"}', 'files': {}, 'form': {}, 'headers': {'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate', 'Connection': 'close', 'Content-Length': '16', 'Content-Type': 'application/json', 'Host': 'httpbin.org', 'User-Agent': 'python-requests/2.4.3 CPython/3.4.0', 'X-Request-Id': 'xx-xx-xx'}, 'json': {'key': 'value'}, 'origin': 'x.x.x.x', 'url': 'http://httpbin.org/post'}
32,490,629
1,072
1,441,840,825
["python","date","datetime","formatting","python-datetime"]
Getting today's date in YYYY-MM-DD in Python? Is there a nicer way than the following to return today's date in the YYYY-MM-DD format? str(datetime.datetime.today()).split()[0]
Use strftime: >>> from datetime import datetime >>> datetime.today().strftime('%Y-%m-%d') '2021-01-26' To also include a zero-padded Hour:Minute:Second at the end: >>> datetime.today().strftime('%Y-%m-%d %H:%M:%S') '2021-01-26 16:50:03' To get the UTC date and time: >>> datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S') '2021-01-27 00:50:03'
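If all you want is the YYYY-MM-DD string, date.today().isoformat() produces exactly that without a format string (a small addition of mine, not from the original answer):

>>> from datetime import date
>>> date.today().isoformat()
'2021-01-26'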
308,999
1,058
1,227,279,220
["python","decorator","functools"]
What does functools.wraps do? In a comment on this answer to another question, someone said that they weren't sure what functools.wraps was doing. So, I'm asking this question so that there will be a record of it on StackOverflow for future reference: what does functools.wraps do, exactly?
When you use a decorator, you're replacing one function with another. In other words, if you have a decorator def logged(func): def with_logging(*args, **kwargs): print(func.__name__ + " was called") return func(*args, **kwargs) return with_logging then when you say @logged def f(x): """does some math""" return x + x * x it's exactly the same as saying def f(x): """does some math""" return x + x * x f = logged(f) and your function f is replaced with the function with_logging. Unfortunately, this means that if you then say print(f.__name__) it will print with_logging because that's the name of your new function. In fact, if you look at the docstring for f, it will be blank because with_logging has no docstring, and so the docstring you wrote won't be there anymore. Also, if you look at the pydoc result for that function, it won't be listed as taking one argument x; instead it'll be listed as taking *args and **kwargs because that's what with_logging takes. If using a decorator always meant losing this information about a function, it would be a serious problem. That's why we have functools.wraps. This takes a function used in a decorator and adds the functionality of copying over the function name, docstring, arguments list, etc. And since wraps is itself a decorator, the following code does the correct thing: from functools import wraps def logged(func): @wraps(func) def with_logging(*args, **kwargs): print(func.__name__ + " was called") return func(*args, **kwargs) return with_logging @logged def f(x): """does some math""" return x + x * x print(f.__name__) # prints 'f' print(f.__doc__) # prints 'does some math'
2,186,525
1,051
1,265,134,790
["python","path","filesystems","glob","fnmatch"]
How to use glob() to find files recursively? I would like to list all files recursively in a directory. I currently have a directory structure like this: src/main.c src/dir/file1.c src/another-dir/file2.c src/another-dir/nested/files/file3.c I've tried to do the following: from glob import glob glob(os.path.join('src','*.c')) But this will only get be files directly in the src subfolder, e.g. I get main.c but I will not get file1.c, file2.c etc. from glob import glob glob(os.path.join('src','*.c')) glob(os.path.join('src','*','*.c')) glob(os.path.join('src','*','*','*.c')) glob(os.path.join('src','*','*','*','*.c')) But this is obviously limited and clunky, how can I do this properly?
There are a couple of ways: pathlib.Path().rglob() Use pathlib.Path().rglob() from the pathlib module, which was introduced in Python 3.5. from pathlib import Path for path in Path('src').rglob('*.c'): print(path.name) glob.glob() If you don't want to use pathlib, use glob.glob(): from glob import glob for filename in glob('src/**/*.c', recursive=True): print(filename) For cases where you need to match files beginning with a dot (.), like files in the current directory or hidden files on a Unix-based system, use the os.walk() solution below. os.walk() For older Python versions, use os.walk() to recursively walk a directory and fnmatch.filter() to match against a simple expression: import fnmatch import os matches = [] for root, dirnames, filenames in os.walk('src'): for filename in fnmatch.filter(filenames, '*.c'): matches.append(os.path.join(root, filename)) This version should also be faster depending on how many files you have, as the pathlib module has a bit of overhead over os.walk().
1,937,622
1,050
1,261,355,161
["python","datetime","date"]
Convert date to datetime in Python Is there a built-in method for converting a date to a datetime in Python, for example getting the datetime for the midnight of the given date? The opposite conversion is easy: datetime has a .date() method. Do I really have to manually call datetime(d.year, d.month, d.day)?
You can use datetime.combine(date, time); for the time, you create a datetime.time object initialized to midnight. from datetime import date from datetime import datetime dt = datetime.combine(date.today(), datetime.min.time())
11,707,586
1,046
1,343,547,891
["python","pandas","printing","column-width"]
How do I expand the output display to see more columns of a Pandas DataFrame? Is there a way to widen the display of output in either interactive or script-execution mode? Specifically, I am using the describe() function on a Pandas DataFrame. When the DataFrame is five columns (labels) wide, I get the descriptive statistics that I want. However, if the DataFrame has any more columns, the statistics are suppressed and something like this is returned: >> Index: 8 entries, count to max >> Data columns: >> x1 8 non-null values >> x2 8 non-null values >> x3 8 non-null values >> x4 8 non-null values >> x5 8 non-null values >> x6 8 non-null values >> x7 8 non-null values The "8" value is given whether there are 6 or 7 columns. What does the "8" refer to? I have already tried dragging the IDLE window larger, as well as increasing the "Configure IDLE" width options, to no avail.
(For Pandas versions before 0.23.4, see at bottom.) Use pandas.set_option(optname, val), or equivalently pd.options.<opt.hierarchical.name> = val. Like: import pandas as pd pd.set_option('display.max_rows', 500) pd.set_option('display.max_columns', 500) pd.set_option('display.width', 1000) Pandas will try to autodetect the size of your terminal window if you set pd.options.display.width = 0. Here is the help for set_option: set_option(pat,value) - Sets the value of the specified option Available options: display.[chop_threshold, colheader_justify, column_space, date_dayfirst, date_yearfirst, encoding, expand_frame_repr, float_format, height, line_width, max_columns, max_colwidth, max_info_columns, max_info_rows, max_rows, max_seq_items, mpl_style, multi_sparse, notebook_repr_html, pprint_nest_depth, precision, width] mode.[sim_interactive, use_inf_as_null] Parameters ---------- pat - str/regexp which should match a single option. Note: partial matches are supported for convenience, but unless you use the full option name (e.g., *x.y.z.option_name*), your code may break in future versions if new options with similar names are introduced. value - new value of option. Returns ------- None Raises ------ KeyError if no such option exists display.chop_threshold: [default: None] [currently: None] : float or None if set to a float value, all float values smaller then the given threshold will be displayed as exactly 0 by repr and friends. display.colheader_justify: [default: right] [currently: right] : 'left'/'right' Controls the justification of column headers. used by DataFrameFormatter. display.column_space: [default: 12] [currently: 12]No description available. display.date_dayfirst: [default: False] [currently: False] : boolean When True, prints and parses dates with the day first, eg 20/01/2005 display.date_yearfirst: [default: False] [currently: False] : boolean When True, prints and parses dates with the year first, e.g., 2005/01/20 display.encoding: [default: UTF-8] [currently: UTF-8] : str/unicode Defaults to the detected encoding of the console. Specifies the encoding to be used for strings returned by to_string, these are generally strings meant to be displayed on the console. display.expand_frame_repr: [default: True] [currently: True] : boolean Whether to print out the full DataFrame repr for wide DataFrames across multiple lines, `max_columns` is still respected, but the output will wrap-around across multiple "pages" if it's width exceeds `display.width`. display.float_format: [default: None] [currently: None] : callable The callable should accept a floating point number and return a string with the desired format of the number. This is used in some places like SeriesFormatter. See core.format.EngFormatter for an example. display.height: [default: 60] [currently: 1000] : int Deprecated. (Deprecated, use `display.height` instead.) display.line_width: [default: 80] [currently: 1000] : int Deprecated. (Deprecated, use `display.width` instead.) display.max_columns: [default: 20] [currently: 500] : int max_rows and max_columns are used in __repr__() methods to decide if to_string() or info() is used to render an object to a string. In case python/IPython is running in a terminal this can be set to 0 and Pandas will correctly auto-detect the width the terminal and swap to a smaller format in case all columns would not fit vertically. The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible to do correct auto-detection. 
'None' value means unlimited. display.max_colwidth: [default: 50] [currently: 50] : int The maximum width in characters of a column in the repr of a Pandas data structure. When the column overflows, a "..." placeholder is embedded in the output. display.max_info_columns: [default: 100] [currently: 100] : int max_info_columns is used in DataFrame.info method to decide if per column information will be printed. display.max_info_rows: [default: 1690785] [currently: 1690785] : int or None max_info_rows is the maximum number of rows for which a frame will perform a null check on its columns when repr'ing To a console. The default is 1,000,000 rows. So, if a DataFrame has more 1,000,000 rows there will be no null check performed on the columns and thus the representation will take much less time to display in an interactive session. A value of None means always perform a null check when repr'ing. display.max_rows: [default: 60] [currently: 500] : int This sets the maximum number of rows Pandas should output when printing out various output. For example, this value determines whether the repr() for a dataframe prints out fully or just a summary repr. 'None' value means unlimited. display.max_seq_items: [default: None] [currently: None] : int or None when pretty-printing a long sequence, no more then `max_seq_items` will be printed. If items are ommitted, they will be denoted by the addition of "..." to the resulting string. If set to None, the number of items to be printed is unlimited. display.mpl_style: [default: None] [currently: None] : bool Setting this to 'default' will modify the rcParams used by matplotlib to give plots a more pleasing visual style by default. Setting this to None/False restores the values to their initial value. display.multi_sparse: [default: True] [currently: True] : boolean "sparsify" MultiIndex display (don't display repeated elements in outer levels within groups) display.notebook_repr_html: [default: True] [currently: True] : boolean When True, IPython notebook will use html representation for Pandas objects (if it is available). display.pprint_nest_depth: [default: 3] [currently: 3] : int Controls the number of nested levels to process when pretty-printing display.precision: [default: 7] [currently: 7] : int Floating point output precision (number of significant digits). This is only a suggestion display.width: [default: 80] [currently: 1000] : int Width of the display in characters. In case python/IPython is running in a terminal this can be set to None and Pandas will correctly auto-detect the width. Note that the IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible to correctly detect the width. mode.sim_interactive: [default: False] [currently: False] : boolean Whether to simulate interactive mode for purposes of testing mode.use_inf_as_null: [default: False] [currently: False] : boolean True means treat None, NaN, INF, -INF as null (old way), False means None and NaN are null, but INF, -INF are not null (new way). Call def: pd.set_option(self, *args, **kwds) Older version information Much of this has been deprecated. As @bmu mentioned, Pandas auto detects (by default) the size of the display area, a summary view will be used when an object repr does not fit on the display. You mentioned resizing the IDLE window, to no effect. If you do print df.describe().to_string() does it fit on the IDLE window? 
The terminal size is determined by pandas.util.terminal.get_terminal_size() (deprecated and removed), this returns a tuple containing the (width, height) of the display. Does the output match the size of your IDLE window? There might be an issue (there was one before when running a terminal in Emacs). Note that it is possible to bypass the autodetect, pandas.set_printoptions(max_rows=200, max_columns=10) will never switch to summary view if number of rows, columns does not exceed the given limits. The max_colwidth option helps in seeing untruncated form of each column.
3,964,681
1,041
1,287,450,553
["python","file-io"]
Find all files in a directory with extension .txt in Python How can I find all the files in a directory having the extension .txt in python?
You can use glob: import glob, os os.chdir("/mydir") for file in glob.glob("*.txt"): print(file) or simply os.listdir: import os for file in os.listdir("/mydir"): if file.endswith(".txt"): print(os.path.join("/mydir", file)) or, if you want to traverse the directory tree, use os.walk: import os for root, dirs, files in os.walk("/mydir"): for file in files: if file.endswith(".txt"): print(os.path.join(root, file))
1,896,918
1,036
1,260,720,623
["python","unit-testing"]
Running unittest with typical test directory structure The very common directory structure for even a simple Python module seems to be to separate the unit tests into their own test directory: new_project/ antigravity/ antigravity.py test/ test_antigravity.py setup.py etc. My question is simply What's the usual way of actually running the tests? I suspect this is obvious to everyone except me, but you can't just run python test_antigravity.py from the test directory as its import antigravity will fail as the module is not on the path. I know I could modify PYTHONPATH and other search path related tricks, but I can't believe that's the simplest way - it's fine if you're the developer but not realistic to expect your users to use if they just want to check the tests are passing. The other alternative is just to copy the test file into the other directory, but it seems a bit dumb and misses the point of having them in a separate directory to start with. So, if you had just downloaded the source to my new project how would you run the unit tests? I'd prefer an answer that would let me say to my users: "To run the unit tests do X."
The best solution in my opinion is to use the unittest command line interface which will add the directory to the sys.path so you don't have to (done in the TestLoader class). For example for a directory structure like this: new_project ├── antigravity.py └── test_antigravity.py You can just run: $ cd new_project $ python -m unittest test_antigravity For a directory structure like yours: new_project ├── antigravity │ ├── __init__.py # make it a package │ └── antigravity.py └── test ├── __init__.py # also make test a package └── test_antigravity.py And in the test modules inside the test package, you can import the antigravity package and its modules as usual: # import the package import antigravity # import the antigravity module from antigravity import antigravity # or an object inside the antigravity module from antigravity.antigravity import my_object Running a single test module: To run a single test module, in this case test_antigravity.py: $ cd new_project $ python -m unittest test.test_antigravity Just reference the test module the same way you import it. Running a single test case or test method: Also you can run a single TestCase or a single test method: $ python -m unittest test.test_antigravity.GravityTestCase $ python -m unittest test.test_antigravity.GravityTestCase.test_method Running all tests: You can also use test discovery which will discover and run all the tests for you, they must be modules or packages named test*.py (can be changed with the -p, --pattern flag): $ cd new_project $ python -m unittest discover $ # Also works without discover for Python 3 $ # as suggested by @Burrito in the comments $ python -m unittest This will run all the test*.py modules inside the test package. Here you can find the updated official documentation of discovery.
432,842
1,033
1,231,677,283
["python","logical-operators"]
How do you get the logical xor of two variables in Python? How do you get the logical xor of two variables in Python? For example, I have two variables that I expect to be strings. I want to test that only one of them contains a True value (is not None or an empty string): str1 = raw_input("Enter string one:") str2 = raw_input("Enter string two:") if logical_xor(str1, str2): print "ok" else: print "bad" The ^ operator is bitwise, and not defined on all objects: >>> 1 ^ 1 0 >>> 2 ^ 1 3 >>> "abc" ^ "" Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unsupported operand type(s) for ^: 'str' and 'str'
If you're already normalizing the inputs to booleans, then != is xor. bool(a) != bool(b)
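Plugged into the question's example (a sketch of mine, written for Python 2 as in the question):

str1 = raw_input("Enter string one:")
str2 = raw_input("Enter string two:")
if bool(str1) != bool(str2):
    print "ok"
else:
    print "bad"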
323,972
1,030
1,227,797,753
["python","multithreading","python-multithreading","kill","terminate"]
Is there any way to kill a Thread? Is it possible to terminate a running thread without setting/checking any flags/semaphores/etc.?
It is generally a bad pattern to kill a thread abruptly, in Python, and in any language. Think of the following cases: the thread is holding a critical resource that must be closed properly the thread has created several other threads that must be killed as well. The nice way of handling this, if you can afford it (if you are managing your own threads), is to have an exit_request flag that each thread checks on a regular interval to see if it is time for it to exit. For example: import threading class StoppableThread(threading.Thread): """Thread class with a stop() method. The thread itself has to check regularly for the stopped() condition.""" def __init__(self, *args, **kwargs): super(StoppableThread, self).__init__(*args, **kwargs) self._stop_event = threading.Event() def stop(self): self._stop_event.set() def stopped(self): return self._stop_event.is_set() In this code, you should call stop() on the thread when you want it to exit, and wait for the thread to exit properly using join(). The thread should check the stop flag at regular intervals. There are cases, however, when you really need to kill a thread. An example is when you are wrapping an external library that is busy with long calls, and you want to interrupt it. The following code allows you (with some restrictions) to raise an exception in a Python thread: import ctypes import inspect import threading def _async_raise(tid, exctype): '''Raises an exception in the threads with id tid''' if not inspect.isclass(exctype): raise TypeError("Only types can be raised (not instances)") res = ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(tid), ctypes.py_object(exctype)) if res == 0: raise ValueError("invalid thread id") elif res != 1: # "if it returns a number greater than one, you're in trouble, # and you should call it again with exc=NULL to revert the effect" ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(tid), None) raise SystemError("PyThreadState_SetAsyncExc failed") class ThreadWithExc(threading.Thread): '''A thread class that supports raising an exception in the thread from another thread. ''' def _get_my_tid(self): """determines this (self's) thread id CAREFUL: this function is executed in the context of the caller thread, to get the identity of the thread represented by this instance. """ if not self.is_alive(): # Note: self.isAlive() on older versions of Python raise threading.ThreadError("the thread is not active") # do we have it cached? if hasattr(self, "_thread_id"): return self._thread_id # no, look for it in the _active dict for tid, tobj in threading._active.items(): if tobj is self: self._thread_id = tid return tid # TODO: in python 2.6, there's a simpler way to do: self.ident raise AssertionError("could not determine the thread's id") def raise_exc(self, exctype): """Raises the given exception type in the context of this thread. If the thread is busy in a system call (time.sleep(), socket.accept(), ...), the exception is simply ignored. If you are sure that your exception should terminate the thread, one way to ensure that it works is: t = ThreadWithExc( ... ) ... t.raise_exc( SomeException ) while t.isAlive(): time.sleep( 0.1 ) t.raise_exc( SomeException ) If the exception is to be caught by the thread, you need a way to check that your thread has caught it. CAREFUL: this function is executed in the context of the caller thread, to raise an exception in the context of the thread represented by this instance. """ _async_raise( self._get_my_tid(), exctype ) (Based on Killable Threads by Tomer Filiba.
The quote about the return value of PyThreadState_SetAsyncExc appears to be from an old version of Python.) As noted in the documentation, this is not a magic bullet because if the thread is busy outside the Python interpreter, it will not catch the interruption. A good usage pattern of this code is to have the thread catch a specific exception and perform the cleanup. That way, you can interrupt a task and still have proper cleanup.
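To illustrate that last pattern, a rough sketch of mine (the exception class and the helper names are made up for the example; they are not part of the original answer):

class JobInterrupted(Exception):
    pass

class Worker(ThreadWithExc):
    def run(self):
        try:
            while True:
                do_one_unit_of_work()  # hypothetical long-running step
        except JobInterrupted:
            pass  # interruption requested by another thread
        finally:
            release_resources()  # hypothetical cleanup, always runs

# elsewhere: worker.raise_exc(JobInterrupted), then worker.join()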
477,486
1,028
1,232,878,843
["python","floating-point","range"]
How do I use a decimal step value for range()? How do I iterate between 0 and 1 by a step of 0.1? This says that the step argument cannot be zero: for i in range(0, 1, 0.1): print(i)
Rather than using a decimal step directly, it's much safer to express this in terms of how many points you want. Otherwise, floating-point rounding error is likely to give you a wrong result. Use the linspace function from the NumPy library (which isn't part of the standard library but is relatively easy to obtain). linspace takes a number of points to return, and also lets you specify whether or not to include the right endpoint: >>> np.linspace(0,1,11) array([ 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. ]) >>> np.linspace(0,1,10,endpoint=False) array([ 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]) If you really want to use a floating-point step value, use numpy.arange: >>> import numpy as np >>> np.arange(0.0, 1.0, 0.1) array([ 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]) Floating-point rounding error will cause problems, though. Here's a simple case where rounding error causes arange to produce a length-4 array when it should only produce 3 numbers: >>> numpy.arange(1, 1.3, 0.1) array([1. , 1.1, 1.2, 1.3])
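If you'd rather not pull in NumPy, a common dependency-free workaround (my addition, not in the original answer) is to iterate over integers and divide, which sidesteps accumulating floating-point error in the step:

for i in range(0, 10):
    print(i / 10)  # 0.0, 0.1, 0.2, ... 0.9 (use i / 10.0 on Python 2)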
209,513
1,016
1,224,178,083
["python","string","hex"]
Convert hex string to integer in Python How do I convert a hex string to an integer? "0xffff" ⟶ 65535 "ffff" ⟶ 65535
Without the 0x prefix, you need to specify the base explicitly, otherwise there's no way to tell: x = int("deadbeef", 16) With the 0x prefix, Python can distinguish hex and decimal automatically: >>> print(int("0xdeadbeef", 0)) 3735928559 >>> print(int("10", 0)) 10 (You must specify 0 as the base in order to invoke this prefix-guessing behavior; if you omit the second parameter, int() will assume base-10.)
899,103
1,006
1,243,014,738
["python","file","list","file-io","newline"]
Writing a list to a file with Python, with newlines How do I write a list to a file? writelines() doesn't insert newline characters, so I need to do: f.writelines([f"{line}\n" for line in lines])
Use a loop: with open('your_file.txt', 'w') as f: for line in lines: f.write(f"{line}\n") For Python <3.6: with open('your_file.txt', 'w') as f: for line in lines: f.write("%s\n" % line) For Python 2, one may also use: with open('your_file.txt', 'w') as f: for line in lines: print >> f, line If you're keen on a single function call, at least remove the square brackets [], so that the strings to be printed get made one at a time (a genexp rather than a listcomp) -- no reason to take up all the memory required to materialize the whole list of strings.
1,854
1,003
1,217,906,598
["python","operating-system","cross-platform","platform-agnostic"]
How to identify which OS Python is running on What do I need to look at to see whether I'm on Windows or Unix, etc.?
>>> import os >>> os.name 'posix' >>> import platform >>> platform.system() 'Linux' >>> platform.release() '2.6.22-15-generic' The output of platform.system() is as follows: Linux: Linux Mac: Darwin Windows: Windows See: platform — Access to underlying platform’s identifying data
15,221,473
1,001
1,362,479,358
["python","upgrade","virtualenv","pip","package-managers"]
How do I update/upgrade pip itself from inside my virtual environment? I'm able to update pip-managed packages, but how do I update pip itself? According to pip --version, I currently have pip 1.1 installed in my virtualenv and I want to update to the latest version. What's the command for that? Do I need to use distribute or is there a native pip or virtualenv command? I've already tried pip update and pip update pip with no success.
pip is just a PyPI package like any other; you could use it to upgrade itself the same way you would upgrade any package: pip install --upgrade pip On Windows the recommended command is: py -m pip install --upgrade pip
1,247,486
996
1,249,688,611
["python","list-comprehension","map-function"]
List comprehension vs map Is there a reason to prefer using map() over list comprehension or vice versa? Is either of them generally more efficient or considered generally more Pythonic than the other?
map may be microscopically faster in some cases (when you're not making a lambda for the purpose, but using the same function in map and a list comprehension). List comprehensions may be faster in other cases and most (not all) Pythonistas consider them more direct and clearer. An example of the tiny speed advantage of map when using exactly the same function: $ python -m timeit -s'xs=range(10)' 'map(hex, xs)' 100000 loops, best of 3: 4.86 usec per loop $ python -m timeit -s'xs=range(10)' '[hex(x) for x in xs]' 100000 loops, best of 3: 5.58 usec per loop An example of how performance comparison gets completely reversed when map needs a lambda: $ python -m timeit -s'xs=range(10)' 'map(lambda x: x+2, xs)' 100000 loops, best of 3: 4.24 usec per loop $ python -m timeit -s'xs=range(10)' '[x+2 for x in xs]' 100000 loops, best of 3: 2.32 usec per loop
4,617,034
985
1,294,330,603
["python","file-io"]
How can I open multiple files using "with open" in Python? I want to change a couple of files at one time, iff I can write to all of them. I'm wondering if I somehow can combine the multiple open calls with the with statement: try: with open('a', 'w') as a and open('b', 'w') as b: do_something() except IOError as e: print 'Operation failed: %s' % e.strerror If that's not possible, what would an elegant solution to this problem look like?
As of Python 2.7 (or 3.1 respectively) you can write with open('a', 'w') as a, open('b', 'w') as b: do_something() (Historical note: In earlier versions of Python, you can sometimes use contextlib.nested() to nest context managers. This won't work as expected for opening multiples files, though -- see the linked documentation for details.) In the rare case that you want to open a variable number of files all at the same time, you can use contextlib.ExitStack, starting from Python version 3.3: with ExitStack() as stack: files = [stack.enter_context(open(fname)) for fname in filenames] # Do something with "files" Note that more commonly you want to process files sequentially rather than opening all of them at the same time, in particular if you have a variable number of files: for fname in filenames: with open(fname) as f: # Process f
6,392,739
983
1,308,352,770
["python","syntax"]
What does the "at" (@) symbol do in Python? What does the @ symbol do in Python?
An @ symbol at the beginning of a line is used for class and function decorators: PEP 318: Decorators Python Decorators - Python Wiki The most common Python decorators are: @property @classmethod @staticmethod An @ in the middle of a line is probably matrix multiplication: @ as a binary operator.
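Two tiny illustrations of mine (not part of the original answer). A decorator:

def trace(func):
    def wrapper(*args, **kwargs):
        print(func.__name__, "was called")
        return func(*args, **kwargs)
    return wrapper

@trace
def add(a, b):
    return a + b

And matrix multiplication (Python 3.5+), shown here with NumPy arrays:

import numpy as np
a = np.array([[1, 0], [0, 1]])
b = np.array([[4, 1], [2, 2]])
print(a @ b)  # same result as np.matmul(a, b)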
682,504
979
1,238,000,421
["python","class","constructor"]
What is a clean "pythonic" way to implement multiple constructors? I can't find a definitive answer for this. As far as I know, you can't have multiple __init__ functions in a Python class. So how do I solve this problem? Suppose I have a class called Cheese with the number_of_holes property. How can I have two ways of creating cheese objects... One that takes a number of holes like this: parmesan = Cheese(num_holes=15). And one that takes no arguments and just randomizes the number_of_holes property: gouda = Cheese(). I can think of only one way to do this, but this seems clunky: class Cheese: def __init__(self, num_holes=0): if num_holes == 0: # Randomize number_of_holes else: number_of_holes = num_holes What do you say? Is there another way?
Actually None is much better for "magic" values: class Cheese: def __init__(self, num_holes=None): if num_holes is None: ... Now if you want complete freedom of adding more parameters: class Cheese: def __init__(self, *args, **kwargs): # args -- tuple of anonymous arguments # kwargs -- dictionary of named arguments self.num_holes = kwargs.get('num_holes', random_holes()) To better explain the concept of *args and **kwargs (you can actually change these names): def f(*args, **kwargs): print('args:', args, 'kwargs:', kwargs) >>> f('a') args: ('a',) kwargs: {} >>> f(ar='a') args: () kwargs: {'ar': 'a'} >>> f(1,2,param=3) args: (1, 2) kwargs: {'param': 3} http://docs.python.org/reference/expressions.html#calls
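Another common pattern worth knowing (my addition, not from this answer) is an alternate constructor via @classmethod, which keeps __init__ simple and gives each way of building a Cheese its own name:

import random

class Cheese:
    def __init__(self, num_holes):
        self.number_of_holes = num_holes

    @classmethod
    def random(cls):
        # randint bounds are illustrative
        return cls(num_holes=random.randint(0, 100))

parmesan = Cheese(num_holes=15)
gouda = Cheese.random()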
9,542,738
972
1,330,740,214
["python","list","find"]
Find a value in a list I use the following to check if item is in my_list: if item in my_list: print("Desired item is in list") Is "if item in my_list:" the most "pythonic" way of finding an item in a list? EDIT FOR REOPENING: the question has been considered duplicate, but I'm not entirely convinced: here this question is roughly "what is the most Pythonic way to find an element in a list". And the first answer to the question is really extensive in all Python ways to do this. Whereas on the linked duplicate question and its corresponding answer, the focus is roughly only limited to the 'in' key word in Python. I think it is really limiting, compared to the current question. And I think the answer to this current question, is more relevant and elaborated that the answer of the proposed duplicate question/answer.
As for your first question: "if item in my_list:" is perfectly fine and should work if item equals one of the elements inside my_list. The item must exactly match an item in the list. For instance, "abc" and "ABC" do not match. Floating point values in particular may suffer from inaccuracy. For instance, 1 - 1/3 != 2/3. As for your second question: There are actually several possible ways of "finding" things in lists. Checking if something is inside This is the use case you describe: Checking whether something is inside a list or not. As you know, you can use the in operator for that: 3 in [1, 2, 3] # => True Filtering a collection That is, finding all elements in a sequence that meet a certain condition. You can use a list comprehension or generator expressions for that: matches = [x for x in lst if fulfills_some_condition(x)] matches = (x for x in lst if x > 6) The latter will return a generator which you can imagine as a sort of lazy list that will only be built as soon as you iterate through it. By the way, the first one is exactly equivalent to matches = filter(fulfills_some_condition, lst) in Python 2. Here you can see higher-order functions at work. In Python 3, filter doesn't return a list, but a generator-like object. Finding the first occurrence If you only want the first thing that matches a condition (but you don't know what it is yet), it's fine to use a for loop (possibly using the else clause as well, which is not really well-known; see the sketch after this answer). You can also use next(x for x in lst if ...) which will return the first match or raise a StopIteration if none is found. Alternatively, you can use next((x for x in lst if ...), [default value]) Finding the location of an item For lists, there's also the index method that can sometimes be useful if you want to know where a certain element is in the list: [1,2,3].index(2) # => 1 [1,2,3].index(4) # => ValueError However, note that if you have duplicates, .index always returns the lowest index: [1,2,3,2].index(2) # => 1 If there are duplicates and you want all the indexes then you can use enumerate() instead: [i for i,x in enumerate([1,2,3,2]) if x==2] # => [1, 3]
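Since the for/else clause mentioned above is easy to miss, here's what that search loop looks like (a sketch of mine; fulfills_some_condition is a placeholder as before):

for x in lst:
    if fulfills_some_condition(x):
        print('found:', x)
        break
else:
    # this branch only runs if the loop finished without hitting break
    print('no match')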
4,690,600
970
1,295,004,830
["python","exception","logging","except","python-logging"]
python exception message capturing import ftplib import urllib2 import os import logging logger = logging.getLogger('ftpuploader') hdlr = logging.FileHandler('ftplog.log') formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s') hdlr.setFormatter(formatter) logger.addHandler(hdlr) logger.setLevel(logging.INFO) FTPADDR = "some ftp address" def upload_to_ftp(con, filepath): try: f = open(filepath,'rb') # file to send con.storbinary('STOR '+ filepath, f) # Send the file f.close() # Close file and FTP logger.info('File successfully uploaded to '+ FTPADDR) except, e: logger.error('Failed to upload to ftp: '+ str(e)) This doesn't seem to work, I get syntax error, what is the proper way of doing this for logging all kind of exceptions to a file
You have to define which type of exception you want to catch. So write except Exception as e: instead of except, e: for a general exception. Other possibility is to write your whole try/except code this way: try: with open(filepath,'rb') as f: con.storbinary('STOR '+ filepath, f) logger.info('File successfully uploaded to '+ FTPADDR) except Exception as e: # works on python 3.x logger.error('Failed to upload to ftp: %s', repr(e)) In older versions of Python 2.x, use except Exception, e instead of except Exception as e: try: with open(filepath,'rb') as f: con.storbinary('STOR '+ filepath, f) logger.info('File successfully uploaded to %s', FTPADDR) except Exception, e: # works on python 2.x logger.error('Failed to upload to ftp: %s', repr(e))
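One more trick that fits this use case (my addition, not part of the original answer): inside an except block, logger.exception logs the message at ERROR level and appends the full traceback automatically, which is usually more useful in a log file than repr(e) alone:

try:
    with open(filepath, 'rb') as f:
        con.storbinary('STOR ' + filepath, f)
except Exception:
    logger.exception('Failed to upload to ftp')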
10,377,998
966
1,335,754,681
["python","iterator","directory"]
How can I iterate over files in a given directory? I need to iterate through all .asm files inside a given directory and do some actions on them. How can this be done in a efficient way?
Python 3.6 version of the above answer, using os - assuming that you have the directory path as a str object in a variable called directory_in_str: import os directory = os.fsencode(directory_in_str) for file in os.listdir(directory): filename = os.fsdecode(file) if filename.endswith(".asm") or filename.endswith(".py"): # print(os.path.join(directory, filename)) continue else: continue Or recursively, using pathlib: from pathlib import Path pathlist = Path(directory_in_str).glob('**/*.asm') for path in pathlist: # because path is object not string path_in_str = str(path) # print(path_in_str) Use rglob to replace glob('**/*.asm') with rglob('*.asm') This is like calling Path.glob() with '**/' added in front of the given relative pattern: from pathlib import Path pathlist = Path(directory_in_str).rglob('*.asm') for path in pathlist: # because path is object not string path_in_str = str(path) # print(path_in_str) Original answer: import os for filename in os.listdir("/path/to/dir/"): if filename.endswith(".asm") or filename.endswith(".py"): # print(os.path.join(directory, filename)) continue else: continue
End of preview.