Python 3.12.0 alpha 6 released

I’m pleased to announce the release of Python 3.12 alpha 6.

https://www.python.org/downloads/release/python-3120a6/

This is an early developer preview of Python 3.12.

Major new features of the 3.12 series, compared to 3.11

Python 3.12 is still in development. This release, 3.12.0a6, is the sixth of seven planned alpha releases.
Alpha releases are intended to make it easier to test the current state of new features and bug fixes, and to test the release process.
During the alpha phase, features may be added up until the start of the beta phase (2023-05-08) and, if necessary, may be modified or deleted up until the release candidate phase (2023-07-31). Please keep in mind that this is a preview release and its use is not recommended for production environments.
Many new features for Python 3.12 are still being planned and written. Among the major new features and changes so far:
  • Even more improved error messages: more exceptions potentially caused by typos now suggest a likely fix to the user (a short example follows this list).
  • Support for the Linux perf profiler to report Python function names in traces (a usage sketch follows this list).
  • The deprecated wstr and wstr_length members of the C implementation of unicode objects were removed, per PEP 623.
  • In the unittest module, a number of long-deprecated methods and classes were removed. (They had been deprecated since Python 3.1 or 3.2; a migration sketch follows this list.)
  • The deprecated smtpd and distutils modules have been removed (see PEP 594 and PEP 632). The setuptools package (installed by default in virtualenvs and many other places) continues to provide the distutils module.
  • A number of other old, broken and deprecated functions, classes and methods have been removed.
  • Invalid backslash escape sequences in strings now warn with SyntaxWarning instead of DeprecationWarning, making them more visible. (They will become syntax errors in the future; see the example after this list.)
  • The internal representation of integers has changed in preparation for performance enhancements. (This should not affect most users as it is an internal detail, but it may cause problems for Cython-generated code.)
  • (Hey, fellow core developer, if a feature you find important is missing from this list, let Thomas know.)
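As a quick illustration of the improved error messages, here is a minimal sketch (the file and function names are invented for the example, and the exact message wording may still change between alphas):

```python
# typo_demo.py -- hypothetical example; running it under Python 3.12
# prints a traceback that includes a "Did you mean" suggestion.

def greet():
    message = "hello"
    print(mesage)  # typo: should be `message`

greet()

# Expected traceback (wording illustrative):
#   Traceback (most recent call last):
#     ...
#   NameError: name 'mesage' is not defined. Did you mean: 'message'?
```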
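For the perf profiler support, a minimal sketch of one way to try it on Linux, assuming the `sys.activate_stack_trampoline()` API and the equivalent `python -X perf` command-line option land as currently implemented in this alpha:

```python
# perf_demo.py -- hypothetical example of enabling perf trampoline
# support from within a script (Linux only). `python -X perf` is the
# command-line equivalent, so only one of the two mechanisms is needed.

import sys

def busy_loop():
    return sum(i * i for i in range(1_000_000))

if hasattr(sys, "activate_stack_trampoline"):  # new in 3.12
    sys.activate_stack_trampoline("perf")

busy_loop()
```

Recording and inspecting would then look something like `perf record -g -- python perf_demo.py` followed by `perf report`, with Python-level names such as `busy_loop` appearing in the report instead of opaque interpreter frames.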
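For the unittest removals, a short sketch of the migration: the long-deprecated aliases (for example `assertEquals` and `failUnless`) are gone, and the modern spellings should be used instead:

```python
import unittest

class ExampleTest(unittest.TestCase):
    def test_values(self):
        # Removed in 3.12 (deprecated since Python 3.1/3.2):
        #   self.assertEquals(1 + 1, 2)
        #   self.failUnless(1 + 1 == 2)
        self.assertEqual(1 + 1, 2)    # current spelling
        self.assertTrue(1 + 1 == 2)   # current spelling

if __name__ == "__main__":
    unittest.main()
```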
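And for the backslash escape change, a minimal sketch that compiles a literal containing the invalid escape `\d` at run time, so the warning can be captured and its category inspected (on 3.11 and earlier the same code reports a DeprecationWarning):

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # The compiled source text contains the invalid escape sequence "\d".
    compile(r's = "\d"', "<demo>", "exec")

for w in caught:
    print(w.category.__name__, w.message)
    # On 3.12 (illustrative): SyntaxWarning invalid escape sequence '\d'

# The fix in real source code is a raw string: r"\d"
```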
For more details on the changes in Python 3.12, see What’s New In Python 3.12. The next pre-release of Python 3.12 will be 3.12.0a7, currently scheduled for 2023-04-03.

And now for something completely different

Let me not to the marriage of true minds
Admit impediments. Love is not love
Which alters when it alteration finds,
Or bends with the remover to remove:
O, no! it is an ever-fixed mark,
That looks on tempests and is never shaken;
It is the star to every wandering bark,
Whose worth’s unknown, although his height be taken.
Love’s not Time’s fool, though rosy lips and cheeks
Within his bending sickle’s compass come;
Love alters not with his brief hours and weeks,
But bears it out even to the edge of doom.
If this be error, and upon me prov’d,
I never writ, nor no man ever lov’d.
Sonnet 116, by William Shakespeare.

Enjoy the new releases

Thanks to all of the many volunteers who help make Python development and these releases possible! Please consider supporting our efforts by volunteering yourself or through organization contributions to the Python Software Foundation.
Your release team,
Thomas Wouters
Ned Deily
Steve Dower

