
Category Archives for "Python"

“Python Workout” is Manning’s Deal of the Day!

Good news: My book, “Python Workout,” is almost done; I’m working on the videos and final edits, and it’ll soon be available in its final form from Manning, both online and in print.

Better news: “Python Workout” is today’s “Deal of the Day,” along with two other Python books: “Data Science Bookcamp” and “Practices of the Python Pro.”

That means that for today (May 25th) only, you can get 50% off any of these books.

If you want to improve your Python skills, then you should definitely take a look at these books — and save some money, if you buy them in the coming 24 hours.

Just go to https://www.manning.com/dotd to learn more and get these books at half off.

Understanding bitwise operations in Python

Have you ever wondered about bitwise operations in Python? They’re not that common nowadays, but they are still in the language, and can be useful in some cases.

A subscriber to my “Better developers” newsletter asked me to explain these. I made a video doing so:

Here’s the Jupyter notebook I used in creating the video, which you can download and use yourself:

If you have Python questions, then send them to me at reuven@lerner.co.il, or on Twitter as @reuvenmlerner. I’ll try to answer them, here or in my newsletter.

Today’s lesson in “Python for non-programmers”: Dictionaries

My free, weekly “Python for non-programmers” course continues!

Our topic for today: Dictionaries, the most important data structure in Python.

If you’ve always wanted to program, then it’s still not too late to join us. The course is completely free of charge, and gives you access to the course recordings (including previous lessons), forever — as well as to our private forum.

Join me at https://PythonForNonProgrammers.com/!

Questions or comments? Contact me at reuven@lerner.co.il, or on Twitter as @reuvenmlerner.


Making sense of generators, coroutines, and “yield from” in Python

Consider the following (ridiculous) Python function:

def myfunc():
    return 1
    return 2
    return 3

I can define the function, and then run it. What do I get back?

>>> myfunc()
1

Not surprisingly, I get 1 back. That’s because Python reaches that first “return” statement and returns 1. There’s no need to continue on to the second and third “return” statements. (Actually, from Python’s perspective, those latter two statements don’t even exist; they are removed from the bytecode altogether at compilation time.)

What happens if I write my function a bit differently, using “yield” instead of “return”?

def myfunc():
    yield 1
    yield 2
    yield 3

If I run my function now, I get the following:

>>> myfunc()
<generator object myfunc at 0x10a92d450>

That’s right: Because I used “yield” instead of “return”, running the function doesn’t execute the function body. Rather, I get back a generator object, meaning something that implements the iterator protocol. For this reason, the second kind of function (using “yield”) is called a “generator function,” although you’ll often hear people describe them as “generators.”

Because generators (i.e., the objects returned by generator functions) implement the iterator protocol, they can be put into “for” loops:

for one_item in myfunc():
    print(one_item)

What do we get back?

1
2
3

How does this work? With each iteration, the body of the generator function is executed. If there’s a “yield” statement, then that value is returned to the “for” loop. And then, most significantly, the generator goes to sleep, pausing immediately after that “yield” statement executes. When the next iteration occurs, the function wakes up at the point where it was paused, and continues running, as if nothing at all had happened.

In other words:

  • A generator function, when executed, returns a generator object.
  • The generator object implements the iterator protocol, meaning that it knows what to do in a “for” loop.
  • Each time the generator reaches a “yield” statement, it returns the yielded value to the “for” loop, and goes to sleep.
  • With each successive iteration, the generator starts running from where it paused (i.e., just after the most recent “yield” statement).
  • When the generator reaches the end of the function, or encounters a “return” statement, it raises a StopIteration exception, which is how Python iterators indicate that they’ve reached the end of the line.

We can simulate this all ourselves, as follows:

>>> g = myfunc()
>>> next(g)
1
>>> next(g)
2
>>> next(g)
3
>>> next(g)
StopIteration

The “next” built-in function is how Python asks an iterator for … well, for the next object that it wants to produce. The response to “next” can either be an object or the StopIteration exception.

This kind of generator function can be quite useful: You can use it for caching, filtering, and treating infinite (or very large) data sets in smaller chunks.
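
For example, here’s a small sketch (my own, not from the newsletter) of an infinite generator, from which a caller can lazily take as many values as it wants:

import itertools

def fibonacci():
    a, b = 0, 1
    while True:
        yield a            # pause here until the next iteration asks for a value
        a, b = b, a + b

# Take just the first ten values from the (conceptually infinite) stream
print(list(itertools.islice(fibonacci(), 10)))    # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]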

But used in this way, generator functions are for one-way communication. We can retrieve information from a generator using “next”, but we cannot interact with it, modify its trajectory, or otherwise affect its execution while it’s running. (That’s not entirely true: The “throw” method allows you to force an exception to be raised within the generator, which you can use to affect what the generator should do.)
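
For example, here’s a small sketch of my own, showing how “throw” raises an exception inside the generator, at the “yield” where it’s currently paused:

def countdown(n):
    while n > 0:
        try:
            yield n
        except ValueError:
            print('Caller asked us to skip ahead')
            n = 2           # so that we yield 1 next, and then finish cleanly
        n -= 1

g = countdown(5)
print(next(g))              # 5
print(next(g))              # 4
print(g.throw(ValueError))  # prints the message, then yields 1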

A number of years ago, Python introduced the “send” method for generators, and a modification of how “yield” can be used. Consider this code:

def myfunc():
    x = ''
    while True:
        print(f'Yielding x ({x}) and waiting...')
        x = yield x
        if x is None: 
            break
        print(f'Got x {x}. Doubling.')
        x = x * 2

The above code looks a bit weird, in that “yield” is on the right side of an assignment statement. This means that “yield” must be providing a value to the generator. Where is it getting that value from?

Answer: From the “send” method, which can be invoked in place of the “next” function. The “send” method works just like “next”, except that you can pass any Python data structure you want into the generator. And whatever you send to the generator is then assigned to “x”.

Now, you need to “prime” the coroutine the first time with “next”, rather than “send”. But other than that, it works just like any other generator — except that whatever you “send” is then available inside the coroutine. As before, each invocation of next/send executes all of the code up to and including the “yield” statement.

It might seem weird, but because the “yield” is on the right side of an assignment operator, and because the right side of assignment always executes before the left side, the generator goes to sleep after returning the right side, but before assigning any value to the left side. When it wakes up, the first thing that happens is that the sent value is assigned to the left side.

Here’s how that can look:

>>> g = myfunc()
>>> next(g)
Yielding x () and waiting...
''
>>> g.send(10)
Got x 10. Doubling.
Yielding x (20) and waiting...
20
>>> g.send(123)
Got x 123. Doubling.
Yielding x (246) and waiting...
246
>>> g.send(None)
StopIteration

Now, this is admittedly pretty neat: Our coroutine hangs around, waiting for us to give it a number to double.

For a long time, it seemed like such coroutines were solutions looking for problems. After all, what can you do with such a thing? From what I can tell, people in the Python world were excited about this sort of idea, but aside from a handful who really understood the potential, coroutines were ignored and seen as somewhere between weird and esoteric.

(I should add that the term “coroutine” has changed its meaning somewhat in the Python world over the last few years, as the “asyncio” library has gained in popularity. I have nothing against asyncio, and have been increasingly impressed with what it does, and how it does it. But that’s not the sort of coroutine I’m talking about here. Note that asyncio’s coroutines started off as generators, and there is still much in asyncio that you can understand via generators. But that’s not my topic here.)

So, where do you use a generator-based coroutine? How can you think about it?

My suggestion: Think of it as an in-program microservice. A nanoservice, if you will, available to your Python program.

Why do I say this? Because the moment that you think of it this way, what you do and don’t want to do with coroutines becomes much clearer.

  • Want to communicate with a database? Use a coroutine, whose local variables will stick around across queries, and which can thus remain connected without using lots of ugly global variables. Send your SQL queries to the coroutine, and get back the query results. (There’s a sketch of this just after the list.)
  • Want to communicate with an external network service, such as a stock-market quote system? Use a coroutine, to which you can send a tuple of symbol and date, and from which you’ll receive the latest information in a dictionary.
  • Want to automatically translate files from one format to another? Use a coroutine, which can take input in one encoding/format and produce output in another encoding/format.
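
For instance, the first of these might look something like the following sketch. This is my own illustration, using sqlite3 from the standard library; the database path and queries are hypothetical, and a real service would want error handling as well:

import sqlite3

def query_service(db_path):
    conn = sqlite3.connect(db_path)    # stays open across queries, with no globals
    results = None
    while True:
        sql = yield results            # wait for the next query via "send"
        if sql is None:
            conn.close()
            break
        results = conn.execute(sql).fetchall()

You would prime it with “next”, and then “send” it one SQL string at a time, getting back the rows from each query — with the connection living in the coroutine’s local variables the whole time.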

Let’s create two simple coroutines that demonstrate how this can work. First, a Pig Latin translator, which will receive strings in English and will return them translated into Pig Latin:

def pl_sentence(sentence):
    output = []
    for one_word in sentence.split():
        if one_word[0] in 'aeiou':
            output.append(one_word + 'way')
        else:
            output.append(one_word[1:] + one_word[0] + 'ay')
    return ' '.join(output)

def pig_latin_translator():
    s = ''
    while True:
        s = yield pl_sentence(s)
        if s is None:
            break

Our service coroutine is “pig_latin_translator”, which uses the “pl_sentence” function to do its translation work. Let’s fire it up:

>>> g = pig_latin_translator()
>>> next(g) 
''
>>> g.send('this is a test')
'histay isway away esttay'
>>> g.send('hello')
'ellohay'

Amazing! Whenever we want to translate some English into Pig Latin, we can do so with our translator, sitting in memory and waiting to serve us. Perhaps this isn’t the most elegant or sophisticated use of coroutines, but it certainly works.

Let’s look at another example: A corporate support chatbot. You know, the sort of thing that appears on a company’s Web site, allows you to enter your complaints, and then actually helps you. No, wait — that’s science fiction; in reality, such chat bots are always unable to help, while telling you how important you are. Let’s create such an unhelpful chatbot:

import random

def bad_service_chatbot():
    answers = ["We don't do that",
               "We will get back to you right away",
               "Your call is very important to us",
               "Sorry, my manager is unavailable"]
    yield "Can I help you?"
    s = ''
    while True:
        if s is None:
            break
        s = yield random.choice(answers)

This chatbot, as its name implies, waits for your input, and then ignores it entirely, returning a canned message meant to make you feel good about yourself and the service you’re getting. Of course, you don’t really feel good after such a conversation, but at least the company has saved on salaries, right?

But I digress.

Let’s see what happens when we run our chatbot:

>>> g2 = bad_service_chatbot()
>>> next(g2)
'Can I help you?'
>>> g2.send('I want to complain')
"We don't do that"
>>> g2.send("No, really. I want to complain.")
"Sorry, my manager is unavailable"

A number of years ago, Python introduced a new form of “yield”, known as “yield from”. And I have to say that the documentation and examples are … well, they make the simple case very obvious and easy to understand, but make the hard case quite difficult to understand. I hope that I can clear that up.

The basic idea is that if you have a function, it’s normal to call other functions from within it. That’s a standard technique in programming, one which allows us to write shorter, more specific functions, as well as to take advantage of abstraction.

But what if you have a generator that wants to return data from another generator, or any other iterable? You could do something like this:

def wrapper(data):
    for one_item in data:
        yield one_item

>>> g = wrapper('abcd')
>>> list(g)
['a', 'b', 'c', 'd']

In other words, we turn to “g”, our generator. And with each iteration, we ask it for its next element. What does the generator do? It invokes a “for” loop on “data”. So with each iteration, we’re asking “g”, and “g” is asking “data”. We can shorten this code with “yield from”:

def wrapper(data):
    yield from data

>>> g = wrapper('abcd')
>>> list(g)
['a', 'b', 'c', 'd']

We got the same result, even though the body of “wrapper” is now dramatically shorter. “yield from” basically lets us outsource the “yield” to another iterable, namely “data”. Our generator is basically saying, “I don’t want to deal with this any more, so I’ll just ask data to take over from here.”

This is the simple use case for “yield from”, and it’s not really very compelling. After all, did they need to add new syntax to the language in order to reduce our “for” loops? The answer is “no.”

So what is “yield from” used for? Consider the two coroutines that we wrote above, for Pig Latin and customer service. Imagine that the companies providing these services have now merged, and that we would like to have a single in-memory service that handles both of them. In other words, we would like to have a coroutine to which we can send “1” to translate Pig Latin and “2” to get customer service.

This all sounds fine, until we realize that we’re somehow going to need to get a value from the caller’s “send” method, and then pass it along to one of our coroutines. That’s going to look rather messy, no?
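
To see just how messy, here’s a rough sketch of the plumbing we would have to write by hand, without “yield from”. (This is a simplification of my own; the real “yield from” also forwards “throw” and “close”, and handles the sub-generator’s return value.)

def switchboard_by_hand():
    choice = yield "Send 1 for Pig Latin, 2 for support"
    if choice == 1:
        sub = pig_latin_translator()
    elif choice == 2:
        sub = bad_service_chatbot()
    else:
        return

    value = next(sub)                   # prime the sub-coroutine
    while True:
        try:
            received = yield value      # hand the sub-coroutine's output to our caller
            value = sub.send(received)  # hand the caller's input to the sub-coroutine
        except StopIteration:
            return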

And so, the real reason to use “yield from” is when you have a coroutine that acts as an agent between its caller and other coroutines. By using “yield from”, you not only outsource the yielded values to a sub-generator, but you also allow that sub-generator to get inputs from the user. For example:

def switchboard():
    choice = yield "Send 1 for Pig Latin, 2 for support"
    if choice == 1:     
        yield from pig_latin_translator() 
    elif choice == 2:
        yield from bad_service_chatbot()
    else:
        return

Now, what happens if we invoke this?

>>> s = switchboard()
>>> next(s) 
'Send 1 for Pig Latin, 2 for support'
>>> s.send(1)
''
>>> s.send('hello')
'ellohay'
>>> s.send('are you awake')
'areway ouyay awakeway'

Fantastic, right? We’re calling “s.send” — meaning, our messages are being sent to the switchboard coroutine. But because it has used “yield from”, our message is passed along to “pig_latin_translator”. And when the translation is done, that coroutine yields its value, which bubbles up directly to the original caller.

Of course, I can also get customer support:

>>> s = switchboard()
>>> next(s)
'Send 1 for Pig Latin, 2 for support'
>>> s.send(2)
'Can I help you?'
>>> s.send('hello')
'Your call is very important to us'

Pretty nifty, eh? But we can do even better, allowing people to go back from our sub-generator to our main one, and then choose a different one:

def switchboard():
    while True:
        choice = yield "1 for PL, 2 for support, 3 to exit"
        if choice == 1:
            yield from pig_latin_translator()
        elif choice == 2:
            yield from bad_service_chatbot()
        elif choice == 3:
            return
        else:
            print('Bad choice; try again')

Here’s an example of how that would work:

>>> s = switchboard()
>>> next(s)
'1 for PL, 2 for support, 3 to exit'
>>> s.send(2)
'Can I help you?'
>>> s.send('hi there')
'Sorry, my manager is unavailable'
>>> s.send('la la la')
"We don't do that"
>>> s.send(None)
'1 for PL, 2 for support, 3 to exit'
>>> s.send(1)
''
>>> s.send('hello')
'ellohay'
>>> s.send(None)
'1 for PL, 2 for support, 3 to exit'
>>> s.send(3)
StopIteration

So, what have we seen here?

  • Coroutines are like in-memory microservices, with state that remains across calls.
  • We use “next” to prime a coroutine the first time, and then use “send” to deliver additional messages.
  • If we want to provide a meta-microservice, or a coroutine that invokes other coroutines, then we can use “yield from”.
  • “yield from” connects the initial “send” method with the sub-coroutine, effectively passing through the coroutine that’s using “yield from”.

I hope that this helps you to consider when and how to use coroutines — and also how you can use “yield from” in your code in more sophisticated ways than just avoiding “for” loops.

Python for non-programmers continues!

The next session of my free, weekly, live “Python for non-programmers” course takes place tomorrow, May 8th.

You can sign up at https://PythonForNonProgrammers.com/. More than 1,700 people from around the world have already joined!

This week’s topics are:

  • Turning strings into lists (and vice versa)
  • Tuples

Anyone who joins gets access to all previous recordings, as well as to our private forum.

Questions or comments? Contact me at reuven@lerner.co.il, or as @reuvenmlerner on Twitter.

Become more fluent with Python functions in just 15 weeks

A new cohort of Weekly Python Exercise A2 (“Functions for beginners”) starts tomorrow — Tuesday, May 5th. If you’ve been using Python for less than one year, and want to write better, more powerful, more idiomatic functions that do more with less code — then this is the course for you.

WPE’s time-tested formula combines many elements — a weekly exercise, “pytest” tests, a private discussion forum, an extended solution and explanation, and live office hours — to push your Python skills ahead, and make you a more fluent developer.

Here’s what previous WPE students have had to say:

  • “WPE is the best investment one can make. There are free MOOCs out there. I tried, but stopped before the end because they don’t teach, they just show how to do some stuff.” — Jean-Pierre Bianchi
  • “The course was really excellent in every significant way.” — Doug Blanding
  • “I’ve learned more in a short time from your courses than I have from other big name courses.” — Alan O’Dannel

This cohort of WPE won’t be offered again until 2021. So if you want to level up your Python skills, then don’t delay! Learn more (and sign up) at https://WeeklyPythonExercise.com/.

Questions or comments? Just reach out to me at reuven@lerner.co.il, or on Twitter as @reuvenmlerner.

Reminder: My free, weekly “Python for non-programmers” course continues on Friday, May 1st

This is a reminder that my free, weekly “Python for non-programmers” course will continue tomorrow (Friday), May 1st, at 10 a.m. Eastern.

In this session, our 7th, we’ll talk about lists! (This is more exciting than it might sound at first.)

The course is 100% free of charge and without obligation. All sessions are recorded and available to anyone who has enrolled — so it’s not too late to sign up and learn Python!

And if you cannot make the live sessions, you can always watch the recorded ones, and participate in our private forum.

More than 1,700 people have already joined. They’re learning to program — and so can you. Join us! Just register at PythonForNonProgrammers.com.


Working with warnings in Python (Or: When is an exception not an exception?)

It happens to all of us: You write some Python code, but you encounter an error:

>>> for i in 10:    # this will not work!
...     print(i)
...

TypeError: 'int' object is not iterable

This isn’t just an error, but an exception. It’s Python’s way of saying that there was a problem in a defined way, such that we can trap it using the “try” and “except” keywords.

Just like everything else in Python, an exception is an object. This means that an exception has a class — and it’s that class we use to trap the exception:

try:
    for i in 10:
        print(i)
except TypeError as e:
    print(f'Problem with your "for" loop: {e}')

We can even have several “except” clauses, each of which looks for a different type of error. But every Python class (except for “object”) inherits from some other class, and that’s true for exception classes, as well. So if we want to trap both “KeyError” and “IndexError”, then we could name them both explicitly. Or we could just trap “LookupError”, the parent class of both “KeyError” and “IndexError”.
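
Here’s a short example of that in action, with a single “except LookupError” clause trapping both a bad dictionary key and a bad list index:

d = {'a': 1}
mylist = [10, 20, 30]

for key, index in [('a', 0), ('b', 0), ('a', 99)]:
    try:
        print(d[key], mylist[index])
    except LookupError as e:           # catches both KeyError and IndexError
        print(f'Lookup problem: {type(e).__name__}: {e}')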

Python’s exception-class hierarchy is visible at https://docs.python.org/3/library/exceptions.html#exception-hierarchy, in the documentation for the Python standard library. It’s useful for knowing what exceptions exist in Python, how the hierarchy looks, and for generally understanding how exceptions work.

But if you look at the bottom of that hierarchy, you’ll see that there’s an exception class called “Warning”, along with a bunch of subclasses such as “DeprecationWarning” and “BytesWarning”. These are included in the exception hierarchy, and they really are exceptions — but they’re neither raised nor used like normal exceptions. What are they, and how do we use them?

First, some history: Warnings have been around in Python for quite some time, having been added in Python 2.1 (back in 2001). PEP 230, written by Guido van Rossum (the original author of Python and the long-time BDFL), added not only a mechanism for alerting users to possible problems, but also ways to send such alerts from within programs and to decide what should happen with them.

Why warnings?

Before I show you how you can use warnings in your own code, let’s first consider why and when you would want to use warnings. After all, you could always use “print” to display a warning. Or if something is really wrong, then you could raise an exception.

But that’s just the point: There are times when you want to get the user’s attention, but without stopping the program or forcing a try-except clause. And while “print” is often useful, it normally writes to standard output (aka “sys.stdout” in Python), which means that your warnings could get mixed up with the program’s regular output.

(And while I just wrote that you might want to get the user’s attention, I’d argue that most of the time, warnings are aimed at developers rather than users. Warnings in Python are sort of like the “service needed” light on a car; the user might know that something is wrong, but only a qualified repairperson will know what to do. Developers should avoid showing warnings to end users.)

You can also imagine a situation in which some warnings are more important than others. You could certainly devise a scheme in which the program would “print” warnings, with the warning’s first character indicating its severity… but why work in this way, when Python has a complete object system, as well as complex data types, at its disposal?

Moreover, there are some situations in which a user might not want to let warnings slide. Maybe I’m running a very sensitive type of program in production, and I’d rather have the program exit prematurely than continue in a potentially ambiguous situation.

Python’s warning system takes all of this into account:

  • It treats warnings as a separate type of output, so that we can’t confuse them with either exceptions or the program’s printed output,
  • It lets us indicate what kind of warning we’re sending the user,
  • It lets the user indicate what should happen with different types of warnings, with some causing fatal errors, others displaying their messages on the screen, and still others being ignored,
  • It lets programmers develop their own, new kinds of warnings.

Not every program needs to have or use warnings. But if your program would like to scold a user for the way that a module was loaded or a function was invoked, then Python’s warnings system provides just what you want.

Warning the user

Let’s say that you want to warn the user about something. You can do so by importing the “warnings” module, and then by using “warnings.warn” to tell them what’s wrong:

import warnings

print('Hello')
warnings.warn('I am a warning!')
print('Goodbye')

What happens when I run the above code (in a file I called “warnings1.py”)? The following:

Hello
./warnings1.py:6: UserWarning: I am a warning!
warnings.warn('I am a warning!')
Goodbye

In other words, the above was all written to my terminal screen. The three lines were all printed in sequence, so it’s not as if the warning was printed at a separate phase (e.g., compilation) of the program. But there is a clear difference between the plain ol’ “print” statements and the output I got from the warning.

First of all, we’re told in which file, and on which line, the warning took place. In a tiny and trivial example like this one, that seems like overkill. But if you have a large application consisting of many different files, then it’ll certainly be nice to know what code generated the warning.

We’re also told that this was a “UserWarning” — one of the types of warnings that we can generate. Just as different types of exceptions allow us to selectively trap them, different types of warnings allow us to handle them differently.

But there’s also something hidden from this output: The “print” statements and my “warnings.warn” statement actually sent their output to two different places. As I wrote above, “print” normally writes to “standard output,” aka “sys.stdout”, typically connected to the user’s terminal window. But “warnings.warn” normally writes to “standard error,” aka “sys.stderr”. The problem is that by default, “sys.stdout” and “sys.stderr” both write to the same place, namely the user’s terminal.

But look at what happens if I redirect program output to a file:

$ ./warnings1.py > output.txt

./warnings1.py:6: UserWarning: I am a warning!
warnings.warn('I am a warning!')

I told my Unix shell that I wanted to run “warnings1.py”, and that all output should be placed in “output.txt”, rather than displayed on the screen. But I didn’t really say “all output.” Rather, by using the “>”, I only redirected output sent to “sys.stdout”. Warnings, which are sent to “sys.stderr”, are still displayed. This is normally considered to be a good thing, ensuring that even if you’ve redirected output to a file, you’ll still be able to see warnings and other errors. So while sys.stdout and sys.stderr both go to the same destination by default, we can see the advantage of being able to separate them.
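
And if you want to capture the warnings in a file as well, you can redirect standard error separately, using “2>”:

$ ./warnings1.py > output.txt 2> errors.txt

Now “output.txt” contains the output from “print”, and “errors.txt” contains the warning.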

Different types of warnings

Let’s say that I’m maintaining a library that has been around for some time. The library has a function that works, but which is a bit old-fashioned, and doesn’t support modern use cases. It’s a pain for me, as the library maintainer, to support two versions of the function — the old one and the new one.

I can declare, in documentation and social media, that a new version (3.0) of my library will be out next year, and that this new version won’t support the old version of the function. But we all know that programmers don’t tend to read documentation. So I would prefer to shock users a bit, telling them that while the old function version still works, they should start to move to the newer version.

How can I do that? With warnings, of course! Here’s an example of how that might look:

import warnings

def hello(name):
    warnings.warn('"hello" will be removed in version 3.0')
    return f'Hello, {name}!'

def newhello(name, decoration=''):
    return f'Hello, {decoration}{name}{decoration}!'

print(hello('world'))
print(newhello('world', decoration='*'))

Now, any time a user runs the function “hello”, they’ll get a warning. Moreover, because this warning goes to standard error (and not standard out), it won’t be mixed with the normal output. Here’s the output from the above:

$ ./warnings2.py

./warnings2.py:7: UserWarning: "hello" will be removed in version 3.0
warnings.warn('"hello" will be removed in version 3.0')
Hello, world!
Hello, *world*!

But it gets better than that: Maybe we want to separate our normal, run-of-the-mill warnings from other types of warnings. For example, we might have a number of functions that are deprecated. To handle this, the “warnings.warn” function supports an optional, second argument — a category of warning. For example, we can use DeprecationWarning:

import warnings

def hello(name):
    warnings.warn('"hello" will be removed in version 3.0',
                   DeprecationWarning)
    return f'Hello, {name}!'

def newhello(name, decoration=''):
    return f'Hello, {decoration}{name}{decoration}!'

print(hello('world'))
print(newhello('world', decoration='*'))

We don’t have to “import” DeprecationWarning, or any other of the standard warning types, because they’re already imported automatically, into the “builtins” namespace that’s always available to a Python program. And indeed, there are a number of such warning classes that we can use, including UserWarning (the default), DeprecationWarning (which we used here), SyntaxWarning, and UnicodeWarning. You can use whichever one of these you deem most appropriate.

You might have noticed that these warning categories are precisely the same classes as we saw earlier, when we were looking through Python’s built-in exception hierarchy. And indeed, this is how those classes are meant to be used, passed as a second argument to “warnings.warn”.

Simple filtering

Let’s say that you are using a bunch of old functions, and that each of those functions is going to warn you that you should really switch to its newer alternative. You’ll probably get a bit annoyed if every time you run your program, you get a bunch of warnings. The warnings are there to inform you that you should upgrade… but sometimes, the warnings are more annoying than helpful.

In such a case, you might want to filter out some of the warnings. Now, “filtering” is a very general term that’s used by the warnings system. It basically lets you say, “When a warning that matches certain criteria fires, do X with it” — where X can be a variety of things.

The simplest filter is “warnings.simplefilter”, and the simplest way to invoke it is with a single string argument. That argument tells the warning system what to do if it encounters a warning:

  • “default” — display a warning the first time it is encountered
  • “error” — turn the warning into an exception
  • “ignore” — ignore the warning
  • “always” — always show the warning, even if it was displayed before
  • “module” — show the warning once per module
  • “once” — show the warning only once, throughout the program

For example, if I want to ignore all warnings, I could say:

warnings.simplefilter('ignore')

And then if I have code that reads:

print('Hello')
warnings.warn('The end is nigh!')
print('Goodbye')

We’ll see output that looks like:

Hello
Goodbye

As you can see, the warning disappeared entirely, thanks to the use of “ignore”.

What happens if we take the other extreme, namely we turn the warnings into exceptions?

warnings.simplefilter('error')
warnings.warn('Yikes!')

Sure enough, we then get an exception:

UserWarning: Yikes!

As you can see, we got a UserWarning exception. We can use “try” and “except” on these, trapping them if we want… although I must admit that it seems weird to me to turn warnings into exceptions, only to trap them. (I’m sure that there is a use case for this, though.)
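
Here’s what that might look like — a small sketch of my own, trapping a warning that has been promoted to an exception:

import warnings

warnings.simplefilter('error')    # all warnings now raise exceptions

try:
    warnings.warn('Yikes!')
except UserWarning as e:
    print(f'Trapped a warning-turned-exception: {e}')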

More specific filtering

I mentioned that “simplefilter” takes a mandatory argument, and we’ve seen what those can be. But it turns out that “simplefilter” takes several additional, optional arguments that can be used to specify what happens when a warning is issued.

For example, let’s say I want to ignore UserWarning but turn DeprecationWarning into an exception. I can say:

import warnings

warnings.simplefilter('ignore', UserWarning)
warnings.simplefilter('error', DeprecationWarning)

warnings.warn('bad news!')  # ignored

warnings.warn('very bad news', category=DeprecationWarning)

This code results in the following output:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
DeprecationWarning: very bad news

In other words, we successfully ignored one type of warning, while turning another into an exception — which, like all exceptions, is fatal if not caught.

The “simplefilter” function takes four arguments, all but the first being optional: the action, a warning category, a line number to match, and an “append” flag that adds the filter to the end of the filter list rather than inserting it at the front.

What else can you do?

The warning system handles a very wide variety of cases, and can be configured in numerous ways. Among other things, you can:

  • Define warning filters from the command line, using the -W flag
  • Set multiple filters, each handling a different case
  • Specify the message and module that should be filtered, either as a string or as a regular expression
  • Create your own warnings, as subclasses of existing warning classes
  • Capture warnings with Python’s logging module, rather than printing the output to sys.stderr.
  • Have output go to a callable (i.e., function or class) of your choice, rather than to sys.stderr, for fancier processing.
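
For instance, here’s a small sketch combining two of these ideas — filtering on a message pattern with “warnings.filterwarnings”, and routing warnings through the “logging” module. (The message text and logging configuration here are just illustrative.)

import logging
import warnings

logging.basicConfig(level=logging.WARNING)
logging.captureWarnings(True)      # warnings now go to the "py.warnings" logger

# Ignore only those deprecation warnings whose message matches this regular expression
warnings.filterwarnings('ignore',
                        message=r'.*will be removed in version 3\.0',
                        category=DeprecationWarning)

warnings.warn('"hello" will be removed in version 3.0', DeprecationWarning)  # ignored
warnings.warn('something else entirely')    # logged via the "py.warnings" logger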

Interested in learning more?

Improve your Python: WPE A2 (“Functions for beginners”) starts next week!

If you’ve programmed in Python for even a short amount of time, then you’ve probably written a fair number of functions.

But many newcomers to Python don’t understand just how useful and powerful functions can be:

  • We can treat functions as nouns, not just as verbs — passing them as arguments, and storing them in variables
  • We can define many different types of parameters, each with its own semantics and advantages
  • We can use functions written by other people, in external modules — those that come with Python’s standard library, and those we download from PyPI

These techniques aren’t just interesting. They can help you to write better, larger, and more sophisticated Python applications.

If you’re looking for a better Python job — in machine learning, Web development, analytics, or devops — then this will certainly help you to improve your understanding and fluency.

I’m starting a new cohort of Weekly Python Exercise on May 5th, one that’s all about functions and modules, aimed at beginners with Python — those with less than one year of experience with the language. Over 15 weeks, you’ll become a more fluent Python programmer, doing more with less code and becoming more confident in what you’re doing.

The course, like all WPE cohorts, has a simple formula:

  • On Tuesday, you get a new question, along with “pytest” tests
  • On the following Monday, you get the solution and a full explanation
  • In between, you chat with others in our private forum, and discuss possible answers
  • Once per month, I hold live office hours, where we can discuss questions you might have about the exercises — or any other Python questions you have.

WPE is now in its fourth year, with many hundreds of satisfied students. I’m confident that if you’ve been using Python for less than a year, this cohort of WPE will help you to improve your knowledge of functions.

Join Weekly Python Exercise A2: Functions for beginners, starting on May 5th. Get a better job — or just do your current job better.

Questions or comments? E-mail me at reuven@lerner.co.il, or on Twitter as @reuvenmlerner.

And don’t forget that I give discounts to (1) students, (2) seniors/pensioners/retirees, (3) anyone living outside of the world’s 30 richest countries, and (4) anyone affected adversely by the coronavirus/covid-19. Just e-mail me at reuven@lerner.co.il if any of these applies to you.
