Python miscellany

special parameters and arguments

A function may have four kinds of parameters:

  • The name of a positional parameter is written unadorned.
  • The name of a default parameter is written with a = followed by an expression:  name=expression
  • The name of a list parameter is preceded by *:   *name
  • The name of a dictionary parameter is preceded by **:   **name

(You can put spaces around the =, *, or **, but leaving no spaces is the most common style.)

For example, we might write a function with all four kinds of parameter:

def foo(a, b, c, d=x, e=y, f=z, *g, **h): …

The above uses three positional parameters (a, b, c), three default parameters (d, e, f), a list parameter (g), and a dictionary parameter (h). (Note that a function may have at most one list parameter and at most one dictionary parameter.)

Arguments also come in four kinds—positional, keyword, sequence-expansion, and dictionary-expansion arguments—and they are written with the same notation and order:

  • A positional argument is written unadorned.
  • A keyword argument is written as the name of a parameter followed by = and the argument expression:     name=argument
  • A sequence-expansion argument is preceded with *:    *argument
  • A dictionary-expansion argument is preceded with **:     **argument



(Again, you can put spaces around the =, *, or **, but leaving no spaces is the most common style.)

So a function call using all four of these types of arguments might read:

foo(a, b, c, d=x, e=y, f=z, *g, **h)

The obvious thing to note is the neat parallel between how arguments and parameters are written. However, this neat parallel is somewhat misleading, for the arguments and parameters aren’t grouped like-with-like by how they appear. Keyword arguments, for instance, do not all necessarily get passed to default parameters even though they look the same. Let’s look at how these different kinds of arguments and parameters actually work:

default parameters

A default parameter is so called because it includes a default value: if a call to the function includes no argument for the parameter, the parameter takes its default value:

def foo(x, y=4, z=2): …
foo(3, 7, 9)      # the parameters get the values: 3, 7, 9
foo(3, 7)         # the parameters get the values: 3, 7, 2
foo(3)            # the parameters get the values: 3, 4, 2
foo()             # exception: the parameter x must be passed a value

Notice that the positional arguments always get passed to the parameters left-to-right, so we can’t provide a positional argument for z without providing one for y.

Now consider if we create a function with a list parameter as well:

def foo(a, b=4, c=2, d='hi', *e): …
foo(3, 8, 11)                 # the parameters get the values: 3, 8, 11, 'hi', ()
foo(3, 8, 11, 'bye', 7, 1)    # the parameters get the values: 3, 8, 11, 'bye', (7, 1)
foo(3, 8, 11, 'bye', 7)       # the parameters get the values: 3, 8, 11, 'bye', (7,)
foo()                         # exception: the parameter a must be passed a value

So given these parameters, we can invoke this function with as many arguments as we like, as long as we provide at least one. The fifth argument and beyond get passed together to e; if we provide fewer than five arguments, e simply gets an empty collection. (Despite the name 'list parameter', the collection actually passed is a tuple.)

The gotcha with default parameters is that the default value expressions are evaluated only once, when the function is created, so the same object gets reused in every call in which the default value is used:

def foo(x=[]):
    x.append(2)
    return x
foo()             # [2]
foo()             # [2, 2]
foo()             # [2, 2, 2]

Above, you might assume that x gets a new empty list each time its default value is used, but the same list object is reused, so the 2's appended by earlier calls are still there at the start of each later call.
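A common idiom for avoiding this gotcha (one conventional approach, not the only one) is to use None as the default and create the list inside the function:

```python
def foo(x=None):
    # a fresh list is created inside the call whenever no argument is given
    if x is None:
        x = []
    x.append(2)
    return x

foo()             # [2]
foo()             # [2] (each call gets its own new list)
```

This works because None is immutable, so sharing it across calls is harmless.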

keyword arguments

A keyword argument specifies its parameter by name rather than position:

def foo(x, y, z): …
foo(3, z=7, y=9)        # same as foo(3, 9, 7)
foo(z=7, y=9, 3)        # exception: keyword arguments cannot precede positional arguments
foo(z=7, y=9, x=3)      # same as foo(3, 9, 7)
foo(3, z=7)             # exception: the parameter y must be passed a value

A general rule to keep in mind is that all positional parameters must always be supplied an argument in one form or another. Also remember that keyword arguments must all be written after the positional arguments.

Positional arguments get matched to parameters first before any keyword arguments get matched. If a keyword argument matches a parameter already matched with a positional argument, an exception is thrown:

def foo(x, y, z): …
foo(3, z=7, x=4, y=9)         # exception: multiple values for parameter x

It's important not to get hung up on the superficial similarity of appearance between default parameters and keyword arguments. There really is no privileged connection between the two. Understand that keyword arguments can pass values to default parameters just as readily as to positional parameters:

def foo(a, b=4, c=2, d='hi'): …
foo(3, 8, d='bye')                  # the parameters get the values: 3, 8, 2, 'bye'
foo(3, d='bye', c=17)               # the parameters get the values: 3, 4, 17, 'bye'
foo(d='bye', a=9, c=17)             # the parameters get the values: 9, 4, 17, 'bye'

However, keyword arguments cannot name list or dictionary parameters.

dictionary parameters

Just like a list parameter absorbs any excess positional arguments, a dictionary parameter absorbs any excess keyword arguments:

def foo(x, y, **z): …
foo(3, y=9)                         # the parameters get the values: 3, 9, {}
foo(y=9, x=3, nick='yo', ellis=2)   # the parameters get the values: 3, 9, {'nick': 'yo', 'ellis': 2}
foo(3, 9, ellis=2)                  # the parameters get the values: 3, 9, {'ellis': 2}

Here we include both a list parameter and a dictionary parameter:

def foo(a, *b, **c): …
foo(3, 8, 2, joan=2)                # the parameters get the values: 3, (8, 2), {'joan': 2}

sequence-expansion and dictionary-expansion arguments

The sequence-expansion argument passes the items of a sequence as additional positional arguments to the call. Even when the sequence-expansion argument is written after the keyword arguments, you should think of the items of the sequence as being tacked on to the list of positional arguments:

foo(3, 7, g=9, h=2, *[8, 4])           # same as foo(3, 7, 8, 4, g=9, h=2)

The dictionary-expansion argument passes the items of a dictionary as additional named arguments to the call.

foo(3, 7, g=9, h=2, **{'bar': 4, 'ack': 3})       # same as foo(3, 7, g=9, h=2, bar=4, ack=3)

The keys of the expanded dictionary must all be strings. A key that does not match any parameter name causes an exception unless the function has a dictionary parameter to absorb it:

foo(3, 7, g=9, h=2, **{'@ bar !': 4})             # exception if foo has no dictionary parameter to absorb the odd key
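To make the two expansion forms concrete, here is a sketch using a hypothetical function (report, invented for illustration) whose signature can absorb both kinds of excess arguments:

```python
# report is a hypothetical function used only for illustration
def report(a, b, *rest, **options):
    return (a, b, rest, options)

args = [2, 3]
opts = {'color': 'red', 'width': 4}
# the list items become extra positional arguments, the dict items extra keyword arguments
report(1, *args, **opts)      # (1, 2, (3,), {'color': 'red', 'width': 4})
```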

Remember that the positional arguments get matched to parameters before any keyword arguments get matched, and an exception is thrown if a keyword argument matches a parameter already given a value:

foo(3, y=9, **{'y': 4})             # exception: y named twice

keyword-only parameters

Python actually allows for a fifth kind of parameter, the keyword-only parameter, which comes in two variants, with and without a default value (so arguably there are six kinds of parameters in total).

A keyword-only parameter, as the name implies, is a parameter required to get its argument from a keyword argument, not a positional argument. Keyword-only parameters are distinguished by being written after the list parameter:

def foo(a, *b, c, d): …             # c and d are keyword-only parameters
foo(13, d=32, c='hi')               # the parameters receive: 13, (), 'hi', 32
foo(13, 8, -2, 15, 2, c='hi')       # exception: d must be given an argument by name

If you want a function with keyword-only parameters but no list parameter, simply write an * with no name:

def foo(a, *, b, c): …              # three parameters: a (positional), b and c (keyword-only)

Keyword-only parameters can be given default values just like a default parameter:

def foo(a, *, b=3, c): …            # keyword-only parameter b has the default value 3
foo(4, c=9)                         # the parameters receive: 4, 3, 9

Notice that we can write the keyword-only parameters in any order, including those with default values.


slices

A slice object is used to represent a portion of a sequence using a start and end index. (Be clear that a slice object itself is composed of just a couple of indices, not any actual items.) A slice is created by invoking the slice class (slice in the builtins module):

slice(3, 5)       # a slice representing index 3 up to (but not including) index 5
slice(0, -2)      # a slice representing index 0 up to (but not including) index -2

Note the asymmetry: a slice represents the portion of a sequence starting at the first specified index and including everything up to, but not including, the second specified index. So, for example, a slice of 3 to 6 covers indexes 3, 4, and 5, but not 6. To get a slice that includes the last item of a sequence, specify an end index that is one greater than the last index. Alternatively, a slice with an end index of None represents a portion of the sequence all the way through the end, whatever the sequence's length.

Several sequence operations accept a slice argument. The __getitem__ method, for example, returns a new sequence which is the portion of the original sequence represented by the slice argument:

a = [6, 2, 14, 7, 88]
a[slice(1, 4)]                # [2, 14, 7]       (a.__getitem__(slice(1, 4)))
a[slice(0, -2)]               # [6, 2, 14]
a[slice(2, 55)]               # [14, 7, 88]
a[slice(30, 40)]              # []

(Notice that the slice indices may lie beyond the bounds of the sequence, as we see above in the last two examples.)
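As mentioned earlier, an end index of None reaches through the end of the sequence whatever its length, and a start index of None begins at the start:

```python
a = [6, 2, 14, 7, 88]
a[slice(2, None)]             # [14, 7, 88] (from index 2 through the end)
a[slice(None, 3)]             # [6, 2, 14]  (from the start up to, but not including, index 3)
```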

The __setitem__ method may also take a slice argument, in which case it removes one or more items from the list and inserts others in their place: the slice argument specifies what portion gets removed, and a sequence argument provides the items to be inserted there:

a = [6, 2, 14, 7, 88]
a[slice(1, 4)] = [55, 99]                 # a.__setitem__(slice(1, 4), [55, 99])
a                                         # [6, 55, 99, 88]

Notice that the number of items inserted into the list need not equal the number of items removed. In fact, the number of items removed or inserted may be zero:

a = [6, 2, 14, 7, 88]
a[slice(1, 1)] = [55, 99]
a                                         # [6, 55, 99, 2, 14, 7, 88]
a = [6, 2, 14, 7, 88]
a[slice(1, 4)] = []
a                                         # [6, 88]

While we can just use __setitem__ to remove multiple items, we can also invoke the __delitem__ method with a slice argument to do the same:

a = [6, 2, 14, 7, 88]
del a[slice(1, 4)]                        # a.__delitem__(slice(1, 4))
a                                         # [6, 88]

When creating a slice object, you can optionally specify a 'step', which designates how many places to advance between indices. By default, the step is 1; a step of 2 effectively skips over every other item; a step of 3 effectively skips over two items at a time; etc.

a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
a[slice(0, None, 2)]                # [1, 3, 5, 7, 9, 11]
a[slice(0, 8, 3)]                   # [1, 4, 7]
a[slice(0, 9, 4)]                   # [1, 5, 9]

We can actually specify a negative step (in which case the start index should be a later index than the end index):

a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
a[slice(5, 0, -1)]                        # [6, 5, 4, 3, 2]
a[slice(10, 3, -2)]                       # [11, 9, 7, 5]

How can we include the first item of the sequence when the step is negative? If we specify an end index of 0, the item at index 0 is not itself included, and we can’t specify an index of -1 because that’s a special way of expressing the index of the last item. The solution is to use None:

  • When the step is positive, a None starting index designates the start of the sequence while a None end index designates the one past the end.
  • When the step is negative, a None starting index designates the end of the sequence while a None end index designates the start (including the first item).

So to include the first item in a slice with a negative step, we write:

a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
a[slice(5, None, -1)]                     # [6, 5, 4, 3, 2, 1]
a[slice(None, None, -3)]                  # [12, 9, 6, 3]

When passing a slice with a step other than +1 to  __setitem__, the number of items inserted must equal the number removed:

a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
a[slice(3, 7, 2)] = ['hi', 'bye']         # OK
a                                         # [1, 2, 3, 'hi', 5, 'bye', 7, 8, 9, 10]
a[slice(3, 7, 2)] = ['yo']                # exception: too few items added

For creating slices in the [] operator, Python has a special syntax of three numbers separated by colons: first the start index, then the end index, and then the step:

a[3:7:2]                      # a[slice(3, 7, 2)]
a[7:3:-2]                     # a[slice(7, 3, -2)]
a[3:7]                        # a[slice(3, 7, None)]

When omitted, the step defaults to None, which is treated the same as +1. The start and end index can also be omitted, in which case they too default to None, but the first colon must always be written to denote that this is a slice:

a[:7]                         # a[slice(None, 7, None)]
a[3:]                         # a[slice(3, None, None)]
a[:]                          # a[slice(None, None, None)]
a[::3]                        # a[slice(None, None, 3)]

break and continue

Just like JavaScript, loops in Python can include break and continue statements, which work just the same.
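A quick sketch of both statements in one loop:

```python
total = 0
for x in [4, -1, 9, 0, 7]:
    if x < 0:
        continue          # skip the rest of this iteration, resume with the next item
    if x == 0:
        break             # exit the loop entirely
    total += x
total                     # 13 (4 + 9; the loop stops before reaching 7)
```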

else clauses with while, for, and try

A while or for-in loop may end with an else clause, which is executed after the loop except when the loop exits via a break statement:

for x in range(5):
    …             # a break executed here causes the else to get skipped over
else:
    …             # runs only if the loop completed without a break

try may also include an else clause after the except clauses but before the finally clause (if any):

try:
    …           # an exception thrown here causes the else to get skipped over
except Cat:
    …
else:
    …           # runs only if the try block completed normally
finally:
    …

The else executes after exiting the try the normal way (i.e. not because of an exception).
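A typical use of for-else is a search loop, where the else handles the not-found case. A sketch:

```python
def contains_negative(items):
    for x in items:
        if x < 0:
            found = True
            break             # skips over the else clause
    else:
        found = False         # runs only when the loop finishes without a break
    return found

contains_negative([3, -1, 2])     # True
contains_negative([3, 1, 2])      # False
```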

del statements

As previously discussed, the del ('delete') statement used with the [] operator is special syntax for invoking an object's __delitem__ method to delete an item from a collection:

x = [1, 2, 3]
del x[1]
x                       # [1, 3]

A del statement can also be used to remove an object attribute:

del         # remove the attribute foo of object x by invoking x.__delattr__('foo')

Lastly (and somewhat strangely), a del statement can be used to remove a variable from the current scope:

del x                   # remove variable x of the current scope

It’s a bit puzzling to imagine why you would want to remove variables. If a variable should be removed, why is it there in the first place?

built-in functions

Some standard functions are implemented specially as built-in functions. A built-in function object represents a function created specially by the interpreter, either for efficiency reasons or because the function requires facilities of the machine not exposed in the semantics of Python. For example, as discussed, the + operator invokes a method named __add__ to do its business. This method itself of course can't use + to perform the addition (because that would be circular), so it must invoke some machine code. However, we can't normally create a function in Python that invokes machine code, so the method must be implemented specially in the interpreter as a built-in function.

classes and functions of the builtins module

The builtins module contains references to the standard classes we've already discussed: str, int, bool, object, type, list, tuple, slice, dict. The module also contains several other classes we'll introduce later. The most commonly used functions of builtins include abs, pow, round, and divmod. The abs function returns the absolute value of a number:

abs(5.3)                # 5.3
abs(-5.3)               # 5.3

The pow function returns the first argument raised to the power of the second argument:

pow(3, 2)               # 9
pow(3, 3)               # 27

Alternatively, Python also includes a ** operator for exponentiation:

3 ** 2                        # 9
3 ** 3                        # 27

The pow function, however, allows for an optional third argument, which makes pow return the result of the exponentiation modulo that argument:

pow(3, 3, 4)                  # 3 (same as 3 ** 3 % 4)

(Apparently this combination comes up fairly commonly, and pow implements the two operations together in a way that is considerably more efficient.) The round function rounds its argument to the nearest integer (ties round to the nearest even integer, so round(3.5) is 4 but round(2.5) is 2):

round(3.5)              # 4
round(3.49)             # 3

You can specify the number of digits past the decimal point to round:

round(3.45555, 4)             # 3.4556
round(3.45555, 3)             # 3.456
round(3.45555, 2)             # 3.46
round(3.45555, 1)             # 3.5
round(3.45555, 0)             # 3.0 (with an explicit digit count, round returns a float)

The divmod function returns a tuple with the result of integer division and the result of modulo:

divmod(10, 3)                       # (3, 1)
divmod(30, 4)                       # (7, 2)

chr and ord

Given a Unicode code point, the chr function returns a string consisting of just that character:

chr(65)                       # 'A'
chr(220)                      # 'Ü'

Given a single-character string, the ord function returns the Unicode code point of that character:

ord('A')                      # 65
ord('Ü')                      # 220

help and dir

Intended for use in interactive mode, the help function invokes the interactive help system at the console. When provided a string argument, help attempts to find the docstring of an object matching that name (such as a class, module, or function). When provided any other kind of argument, help prints the docstring of the argument:

help()                        # start interactive help
help('foo')                   # prints the docstring of something named foo
help(x)                       # prints the docstring of x
help(help)                    # prints the docstring of the help function itself

The dir function is also mainly useful for curious programmers in interactive mode. The dir function returns a list of strings of all the names of attributes of an object (including all the attributes it inherits):

dir(x)             # return a list of the attribute names of object x (including those that it inherits)
dir(str)           # return a list of the attribute names of the str class (including those that str inherits)

For some special cases, the list returned by dir may not exhaustively include all attributes. For module objects, for example, the list returned by dir only includes the attributes of the module object itself, not any attributes inherited from the module class. (For more details of dir's special behaviors, consult the Python documentation.)

bin, hex, and oct

The bin function returns a string of a number in binary form:

bin(23)                       # ‘0b10111’

The hex function returns a string of a number in hex form:

hex(23)                       # ‘0x17’

The oct function returns a string of a number in octal form:

oct(23)                       # ‘0o27’

isinstance and issubclass

The isinstance function returns True if its first argument is an instance of its second argument (a class):

isinstance(23, int)                     # True
isinstance('hello', dict)               # False
isinstance('hello', object)             # True ('hello' is a string, and the str class inherits from object)
isinstance(dict, type)                  # True (all classes are instances of type)
isinstance(dict, object)                # True (dict is an instance of type, and type inherits from object)

The issubclass function returns True if its first argument (a class) is a descendant of its second argument (another class):

issubclass(str, object)             # True (all types ultimately inherit from object)
issubclass(type, object)            # True
issubclass(object, type)            # False (object is an instance of type but does not inherit from it)

min, max, sorted, and sum

The min function returns the smallest item in a collection:

min([35, 18, -3])                         # -3

The max function returns the largest item in a collection:

max([35, 18, -3])                         # 35

The sorted function takes an iterable object and returns a new list of the items in sorted order:

sorted((35, 18, -3))                      # [-3, 18, 35]

(Notice that sorted works on an immutable type because it returns a new list instead of modifying the argument. Also note that this makes calling sorted on a list different from calling the sort method: sorted makes a new list, whereas sort modifies the existing list.) If the items of the collection are not all comparable with each other, the min, max, and sorted functions throw an exception.

The sum function returns the sum of a collection of numbers:

sum((35, 18, -3))                         # 50

An optional second argument specifies a start value that is added into the total:

sum((35, 18, -3), 4)                      # 54

print and input

The print function sends its arguments as strings to a file. By default, this is standard output (as represented by stdout in the sys module), but you can specify another file object by providing a keyword argument file. (We'll discuss standard output and files in depth in a later module. For now you can think of print as just making text appear on the console.)

The input function reads text from standard input (text entered by the user at the console) and returns it as a string. A call to input doesn't return until a complete line has been read from standard input (so usually this means the program waits until the user finishes typing and hits enter).

input()                             # read line from standard input and return it as a string

As a convenience, a string argument to input gets printed to standard output, effectively prompting the user:

input('How old are you?')                 # asking the user to type their age
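As for print's file keyword argument, mentioned above: the file can be any object with a write method. As a sketch, here we capture print's output in an in-memory buffer from the standard io module:

```python
import io

buffer = io.StringIO()            # an in-memory object with a write method
print('hello', 42, file=buffer)   # writes to the buffer instead of standard output
buffer.getvalue()                 # 'hello 42\n' (arguments separated by spaces, newline appended)
```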

exit and quit

Invoking exit or quit throws a SystemExit exception, which we generally let propagate to the top level of code and terminate the program. Effectively, you call either of these when you wish to end the program early without completing the main module.


sets

A set is another standard mutable collection type, in this case an unordered collection of distinct hashable items. The set type includes many of the same operations as dictionaries, but sets are not considered proper mappings because they have no concept of key-value pairs.

Invoking the set class (set in the builtins module) creates a set object:

set()                               # a new empty set
set([6, 4, 6, 6, 'hi'])             # a set with the values 6, 4, and 'hi'

Notice that the set only contains one 6, not three 6’s.

Non-empty sets can also be created with a literal syntax of {}:

{6, 4, 6, 6, 'hi'}                        # a set with the values 6, 4, and 'hi'

Notice that there is no literal for the empty set: {} denotes an empty dictionary, so an empty set must be created by invoking set().

The add method adds a new item to the set, but if the new item is equal (not necessarily identical!) to any already in the set, it is not added:

a = {6, 4, 6, 6, 'hi'}                    # a set with the values 6, 4, and 'hi'
a.add(5)                                  # 5 is new, so it is added
a.add(4)                                  # 4 is already in the set, so nothing happens
a                                         # a set with the values 6, 4, 'hi', and 5

For the remaining set operations, consult the Python docs.
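A few of those operations, for reference (these are standard set methods and operators):

```python
a = {1, 2, 3}
b = {3, 4}
2 in a            # True  (membership test)
a | b             # {1, 2, 3, 4} (union)
a & b             # {3}          (intersection)
a - b             # {1, 2}       (difference)
a.remove(2)       # removes 2 from the set (throws KeyError if the item is absent)
a                 # {1, 3}
```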

set and dictionary comprehensions

A set comprehension is just like a list comprehension, except it is denoted in {} rather than [] and returns a set of the produced items rather than a list:

{x ** 2 for x in [5, 3, 5, 10]}                       # {25, 9, 100}

A dictionary comprehension produces a dictionary, and so the output includes a key and value:

{key: value for target in sequence}

For example:

{x: x ** 2 for x in [5, 3, 5, 10]}              # {5: 25, 3: 9, 10: 100}

(Notice that the key 5 comes up twice, but only appears once in the dictionary. What happens when the same key comes up more than once is that each produced item overwrites any previously produced item with the same key.)


generators

A function containing one or more yield statements is a special kind of function: calling it returns a generator object, which represents a suspended call to the function. When first created, the call represented by the generator starts off suspended at the very start of the function. To resume execution of the call, invoke the __next__ method of the generator. The resumed call runs until a yield or return statement is encountered: yield suspends the call again and returns a value; a return throws a StopIteration exception rather than returning a value to the caller. Effectively, a generator is like an iterator whose values are produced by a function. For example:

# foo is a regular function object, but calling the function produces a generator rather than executing the body
def foo(x):
    for y in [5, 2, 4]:
        yield x + y

a = foo(3)        # create a generator with the argument 3
b = foo(100)            # create a second generator, with the argument 100
# recall that the builtins function next invokes an object’s __next__ method
next(a)           # 8 (3 + 5)
next(a)           # 5 (3 + 2)
next(b)           # 105 (100 + 5)
next(b)           # 102 (100 + 2)
next(a)           # 7 (3 + 4)
next(a)           # exception: StopIteration

(The generator ends with an implicit return when the function body completes; writing an explicit return at the end would work the same. Also understand that a return in a generator doesn't return a value to the caller in the normal sense; since Python 3.3, a return with an argument instead attaches that value to the StopIteration exception.)

Understand that nothing requires us to run a generator to completion; a generator may end up discarded before its call completes or in fact before it is ever run at all. (If you’re wondering what execution of a generator looks like in memory, the locals are not stored on the call stack as normal, for it would be awkward to indefinitely keep around the frame of a suspended function call.)
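Because a generator only runs when asked for the next value, it can even represent an unbounded series; we simply never run it to completion. A sketch:

```python
def naturals():
    n = 0
    while True:           # an infinite loop: this generator never finishes on its own
        yield n
        n += 1

g = naturals()
first_three = [next(g), next(g), next(g)]     # [0, 1, 2]; g can then simply be discarded
```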

Generators also include a method send. To understand send, first understand that yield is actually an operator, not a statement: a yield expression itself evaluates to a value when the call is resumed. When the generator is resumed with __next__, the yield expression evaluates to None; when resumed with send, it evaluates to the argument passed to send. For example:

def foo():
    x = 50
    for y in [1, 2, 3]:
        x = yield (x + y)

a = foo()
next(a)           # 51 (50 + 1)
next(a)           # exception: resuming with __next__ assigns None to x, and None + 2 fails

The generator method send does the same thing as __next__, except the argument to send becomes the value returned by yield:

a = foo()
next(a)           # 51  (the first resume must be __next__ or send(None))
a.send(100)       # 102 (the resumed function assigns 100 to x, then yields 100 + 2)
a.send(7)         # 10  (x is assigned 7, then the function yields 7 + 3)
a.send(0)         # the loop is exhausted, so the generator throws StopIteration

Generators are a fairly obscure part of Python to begin with, let alone this send method. You may end up writing plenty of Python code and never use these features.

generator expressions

A generator expression looks like a list comprehension but produces a generator instead of a list. Calls to __next__ return the successive items until StopIteration is thrown. A generator expression is surrounded in () rather than []:

a = (x ** 2 for x in [5, 3, 2, 10] if x != 3)
next(a)           # 25
next(a)           # 4
next(a)           # 100
next(a)           # StopIteration exception

(Note that this is aggravatingly yet one more use of parens to mean something entirely different.)

Be clear that a generator expression produces a single generator. A function containing yield, on the other hand, is still a function, but one which creates a new generator each time it is called.


destructuring

As a convenient way to assign one or more items from a sequence to variables, Python has a feature called destructuring. Destructuring is notated by writing the target variables in a list or tuple that mimics the sequence from which we're grabbing items:

(a, b, c) = [5, 2, 9]
a                             # 5
b                             # 2
c                             # 9

While superficially this appears to be assigning to a tuple, there really is no tuple created by the target expression. It doesn’t matter if the target is a list or tuple: both accomplish the very same thing.

The number of variables in the destructuring target must match the number of items in the sequence:

(a, b, c) = [5, 2]                  # exception: too few items in the sequence
(a, b, c) = [5, 2, 9, 3]            # exception: too many items in the sequence

However, we may precede one of the target variables with an * to make it glom up all excess items into a list:

(a, *b, c, d) = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
a                             # 1
b                             # [2, 3, 4, 5, 6, 7, 8]
c                             # 9
d                             # 10

(Note that unlike with function list parameters, the * can be used on any of the target variables and so glom up excess items from the start or middle of the sequence, not just the end.)

Destructuring can get even more complicated because the items grabbed from the sequence may themselves be sequences which we can destructure:

((a, b), c, d) = [[5, 2], 9, 3]
a                             # 5
b                             # 2
c                             # 9
d                             # 3

For the destructuring target above to work, the value expression of the assignment must return a sequence with three items, and the first item must itself be a sequence with two items. Alternatively, we could just write more than one destructuring assignment to get the same effect:

(x, c, d) = [[5, 2], 9, 3]
(a, b) = x
a                             # 5
b                             # 2
c                             # 9
d                             # 3

Destructuring can also be used in other assignment-like contexts: for-ins, list comprehensions, and generator expressions. The subtle difference, however, is that it is each item of the provided input sequence which is destructured, not the sequence itself. For example:

for (x, y) in [(3, 5), (4, 2)]:
    print(x + y)

This prints first 8 then 6: in the first iteration 3 is assigned to x and 5 to y; in the second iteration, 4 is assigned to x and 2 to y.
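This pattern is especially common when iterating a dictionary's items method, which produces key-value pairs:

```python
ages = {'alice': 30, 'bob': 25}
lines = []
for (name, age) in ages.items():      # each (key, value) pair is destructured
    lines.append(name + ' is ' + str(age))
lines                                 # ['alice is 30', 'bob is 25']
```

(In modern Python, a dictionary's items come out in insertion order.)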

implicit parens for tuples and generator expressions

In the context of assignment, the parens that surround a tuple can be left implicit. For example:

a = j, k, l                         # a = (j, k, l)

(A generator expression, in contrast, cannot appear as the right-hand side of an assignment without its parens.)

This also works for the target of assignment when using destructuring:

x, y, z = a                        # (x, y, z) = a
for x, y, z in a: …                # for (x, y, z) in a: …

A generator expression passed as the sole argument of a call may, however, leave its parens implicit:

foo(x for x in [3, 1, -7])        # foo((x for x in [3, 1, -7]))

I deeply dislike these allowances, as they are trivial conveniences that only create confusion for learners.

with statement

The with statement takes this general form:

with context:
    …

…where context is an expression returning a ‘context manager’ object—an object with __enter__ and __exit__ methods. When a with statement is executed:

  1. context is evaluated to get a context manager object
  2. the object’s __enter__ method is run
  3. the body is run
  4. the object’s __exit__ method is run

The key feature of with, however, is that the __exit__ method gets run no matter what happens in the body: if an exception is thrown in the body, the exception is passed as argument to __exit__ before the exception continues to propagate.

The prime example of a context manager object is a file object. As we’ll discuss in a later unit, when opening a file, we want to make sure it gets closed when we’re done with it. The with construct helps ensure we do this properly.
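To see the order of operations, here is a minimal sketch of a context manager written by hand (the class name Tracker is my own invention, not a standard type):

```python
class Tracker:
    def __init__(self):
        self.events = []

    def __enter__(self):
        self.events.append('enter')
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.events.append('exit')
        return False          # False means: do not suppress the exception

t = Tracker()
try:
    with t:
        t.events.append('body')
        raise ValueError('oops')
except ValueError:
    pass

print(t.events)               # ['enter', 'body', 'exit']
```

Even though the body raised an exception, __exit__ still ran before the exception propagated out of the with statement.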

Optionally, the context manager of a with can be assigned to a variable in an as clause so that we may refer to it in the body:

with context as target:             # assigns the context manager to target 

with also allows us to use more than one context manager by separating them with commas (each optionally with an as clause):

with foo() as x, bar(), ack() as y:

This is effectively the same as nesting multiple with statements:

with foo() as x:
    with bar():
        with ack() as y:


assert statement

Python’s assert statement has this form:

assert condition, expression


The condition is evaluated, and if it is false, an AssertionError is raised. The optional second expression, if present, is evaluated and its value passed to the AssertionError constructor. (Note that assert accepts at most these two expressions: assert x == y, 'hello', foo() is a syntax error.) For example:

assert x == y, 'hello'


…is functionally the same as writing:

if not (x == y):
    raise AssertionError('hello')

(AssertionError is found in the builtins module.)
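We can observe this equivalence by catching the AssertionError (assuming asserts are enabled, which is the default):

```python
try:
    assert 1 == 2, 'values differ'
except AssertionError as e:
    caught = e.args           # the second expression becomes the constructor argument
print(caught)                 # ('values differ',)
```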

The one difference is that assert statements are executed only when __debug__ in the builtins module has the value True.

The Python interpreter sets __debug__ to True by default, and assigning to __debug__ in code is a syntax error, so the way to turn off asserts is to run the Python interpreter with the -O or -OO flag (either of which sets __debug__ to False).

A very subtle quirk of assert is that, when __debug__ is not True, Python omits assert statements from the bytecode of any .pyc files it produces, so modules later loaded from those .pyc files will lack their assert statements entirely, regardless of the value of __debug__.


conditional operator (if-else expressions)

An if-else can appear as an expression in this form:

expression1 if condition else expression2


This is Python’s equivalent of Javascript’s ?: operator, though note that the operands appear in a different order:

9 > 5 ? 'hello' : 'bye'       // Javascript
'hello' if 9 > 5 else 'bye'   # Python


As with ?:, best practice is to always surround these if-else expressions in parens to avoid any question about precedence. Besides, explicit parens make these expressions much easier to read:

3 + (42 if x == y else 77)          # add 3 to 42 or 77 depending upon whether x == y

bitwise operators

Python has operators for bitwise operations upon integers (all are binary except ~, which is unary).

  • ~             (bitwise not)
  • &             (bitwise and)
  • |              (bitwise or)
  • ^             (bitwise xor)

Wikipedia gives a good explanation of these operations.
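A quick sketch of each operator in action (binary literals make the bit patterns easy to see):

```python
a = 0b1100        # 12
b = 0b1010        # 10
print(a & b)      # 8   (0b1000: bits set in both)
print(a | b)      # 14  (0b1110: bits set in either)
print(a ^ b)      # 6   (0b0110: bits set in exactly one)
print(~a)         # -13 (~a equals -(a + 1))
```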

iterator-like types in builtins


The map class is an iterator-like type that is a bit confusingly named as it concerns sequences, not mappings. Creating a map object requires a function argument followed by one or more iterable objects (meaning iterators and/or sequences). A map object produces values by invoking its function with an item from each of its iterables, e.g.:

map(pow, [3, 4, 6], [2, 3, 2])          # a map which will produce: 9, 64, 36

Note that the number of iterables determines the number of arguments passed to the function; here pow gets invoked each time with two arguments, the first from the first iterable, the second from the second iterable.

The iterable arguments need not all have the same number of items: the map produces as many values as the shortest iterable:

map(pow, [3, 4], [2, 3, 2, 4])        # a map which will produce: 9, 64
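Like the other iterator-like types below, a map object produces its values lazily; passing it to list materializes them, after which the iterator is spent:

```python
m = map(pow, [3, 4, 6], [2, 3, 2])
print(list(m))    # [9, 64, 36]
print(list(m))    # [] (a map iterator is exhausted after one pass)
```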


The reversed class, another iterator-like type, is instantiated with a sequence, and it produces the values of its sequence in reverse order:

reversed([3, 4, 5])             # a reversed which will produce: 5, 4, 3


The zip class, another iterator-like type, is instantiated with one or more iterable arguments, and it produces tuples composed of one item from each of the iterables:

zip([7, 52], [3, 4], [2, 3])       # a zip which will produce: (7, 3, 2), (52, 4, 3)

The iterable arguments need not all have the same number of items: the zip produces as many values as the shortest iterable:

zip([7, 52], [3, 4, 7, -11], [2, 3])       # a zip which will produce: (7, 3, 2), (52, 4, 3)


The enumerate class, another iterator-like type, is instantiated with one iterable argument, and it produces tuples with one item from the iterable preceded by its index:

enumerate(['hello', 'bye', 45])       # an enumerate which will produce: (0, 'hello'), (1, 'bye'), (2, 45)
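enumerate pairs naturally with the tuple destructuring covered earlier: a for-in loop can unpack each (index, item) tuple directly:

```python
words = ['hello', 'bye']
pairs = []
for i, word in enumerate(words):    # each (index, item) tuple is destructured
    pairs.append((i, word))
print(pairs)                        # [(0, 'hello'), (1, 'bye')]
```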


The filter class, another iterator-like type, is instantiated with a function and an iterable, and it produces the values of the iterable which, when passed to the function, cause the function to return a true value:

filter(len, ['hi', (), 'hello', ''])    # a filter which will produce: 'hi', 'hello'

(Above, the len function returns 0 for the empty tuple and empty string, and 0 is considered false, so these values are not produced from the filter.)
