Welcome!#
You’ve landed on the documentation pages for the Jupyter Server Project. Some other pages you may have been looking for:
Jupyter Server GitHub Repo, the source code we describe in these docs.
Jupyter Notebook GitHub Repo, the source code for the classic Notebook.
JupyterLab GitHub Repo, the JupyterLab server which runs on the Jupyter Server.
Introduction#
Jupyter Server is the backend that provides the core services, APIs, and REST endpoints for Jupyter web applications.
Note
Jupyter Server is a replacement for the Tornado Web Server in Jupyter Notebook. Jupyter web applications should move to using Jupyter Server. For help, see the Migrating from Notebook Server page.
Applications#
Jupyter Server extensions can use the framework and services provided by Jupyter Server to create applications and services.
Examples of Jupyter Server extensions include:
- JupyterLab
JupyterLab computational environment.
- Jupyter Resource Usage
Jupyter Notebook Extension for monitoring your own resource usage.
- Jupyter Scheduler
Run Jupyter notebooks as jobs.
- jupyter-collaboration
A Jupyter Server Extension Providing Support for Y Documents.
- NbClassic
Jupyter notebook as a Jupyter Server extension.
- Cylc UI Server
A Jupyter Server extension that serves the cylc-ui web application for monitoring and controlling Cylc workflows.
For more information on extensions, see Server Extensions.
Who’s this for?#
The Jupyter Server is a highly technical piece of the Jupyter Stack, so we’ve separated documentation to help specific personas:
Users: people using Jupyter web applications.
Operators: people deploying or serving Jupyter web applications to others.
Developers: people writing Jupyter Server extensions and web applications.
Contributors: people contributing directly to the Jupyter Server library.
If you find gaps in our documentation, please open an issue (or better, a pull request) on the Jupyter Server GitHub repo.
Table of Contents#
Documentation for Users#
The Jupyter Server is a highly technical piece of the Jupyter Stack, so users probably won’t import or install this library directly. These pages are meant to help you in case you run into issues or bugs.
Installation#
Most Jupyter users will never need to install Jupyter Server manually. Jupyter Web applications will include the correct version of Jupyter Server as a dependency. It’s best to let those applications handle installation, because they may require a specific version of Jupyter Server.
If you decide to install manually, run:
pip install jupyter_server
You can upgrade or downgrade to a specific version of Jupyter Server by adding a version specifier to the command above:
pip install jupyter_server==1.0
Configuring a Jupyter Server#
Using a Jupyter config file#
By default, Jupyter Server looks for server-specific configuration in a jupyter_server_config file located on a Jupyter path. To list the paths where Jupyter Server will look, run:
$ jupyter --paths
config:
/Users/username/.jupyter
/usr/local/etc/jupyter
/etc/jupyter
data:
/Users/username/Library/Jupyter
/usr/local/share/jupyter
/usr/share/jupyter
runtime:
/Users/username/Library/Jupyter/runtime
The paths under config are listed in order of precedence. If the same trait is listed in multiple places, it will be set to the value from the file with the highest precedence.
Jupyter Server uses IPython’s traitlets system for configuration. Traits can be listed in a Python or JSON config file. To quickly create a jupyter_server_config.py file in the .jupyter directory, with all the defaults commented out, use the following command:
$ jupyter server --generate-config
In Python files, these traits will have the prefix c.ServerApp. For example, your configuration file could look like:
# inside a jupyter_server_config.py file.
c.ServerApp.port = 9999
The same configuration in JSON looks like:
{
  "ServerApp": {
    "port": 9999
  }
}
Using the CLI#
Alternatively, you can configure Jupyter Server when launching from the command line using CLI args. Prefix each argument with --ServerApp like so:
$ jupyter server --ServerApp.port=9999
Full configuration list#
See the full list of configuration options for the server here.
Launching a bare Jupyter Server#
Most of the time, you won’t need to start the Jupyter Server directly. Jupyter Web Applications (like Jupyter Notebook, JupyterLab, Voila, etc.) come with their own entry points that start a server automatically.
Sometimes, though, it can be useful to start Jupyter Server directly when you want to run multiple Jupyter Web applications at the same time. For more details, see the Managing multiple extensions page. If these extensions are enabled, you can simply run the following:
> jupyter server
[I 2020-03-20 15:48:20.903 ServerApp] Serving notebooks from local directory: /Users/username/home
[I 2020-03-20 15:48:20.903 ServerApp] Jupyter Server 1.0.0 is running at:
[I 2020-03-20 15:48:20.903 ServerApp] http://localhost:8888/?token=<...>
[I 2020-03-20 15:48:20.903 ServerApp] or http://127.0.0.1:8888/?token=<...>
[I 2020-03-20 15:48:20.903 ServerApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[I 2020-03-20 15:48:20.903 ServerApp] Welcome to Project Jupyter! Explore the various tools available and their corresponding documentation. If you are interested in contributing to the platform, please visit the community resources section at https://jupyter.org/community.html.
[C 2020-03-20 15:48:20.907 ServerApp]
To access the server, open this file in a browser:
file:///Users/username/jpserver-###-open.html
Or copy and paste one of these URLs:
http://localhost:8888/?token=<...>
or http://127.0.0.1:8888/?token=<...>
Getting Help#
If you run into any issues or bugs, please open an issue on Github.
We’d also love to have you come by our Team Meetings.
Documentation for Operators#
These pages are targeted at people using, configuring, and/or deploying multiple Jupyter Web Applications with Jupyter Server.
Managing multiple extensions#
One of the major benefits of Jupyter Server is that you can serve multiple Jupyter frontend applications on top of the same Tornado web server. That’s because every Jupyter frontend application is now a server extension. When you run a Jupyter Server with multiple extensions enabled, each extension appends its own set of handlers and static assets to the server.
Listing extensions#
When you install a Jupyter Server extension, it should automatically add itself to your list of enabled extensions. You can see a list of installed extensions by calling:
> jupyter server extension list
config dir: /Users/username/etc/jupyter
myextension enabled
- Validating myextension...
myextension OK
Enabling/disabling extensions#
You can enable/disable an extension using the following commands:
> jupyter server extension enable myextension
Enabling: myextension
- Validating myextension...
myextension OK
- Extension successfully enabled.
> jupyter server extension disable myextension
Disabling: myextension
- Validating myextension...
myextension OK
- Extension successfully disabled.
Running an extension from its entrypoint#
Extensions that are also Jupyter applications (i.e. Notebook, JupyterLab, Voila, etc.) can be launched from a CLI entrypoint. For example, launch Jupyter Notebook using:
> jupyter notebook
Jupyter Server will automatically start a server and the browser will be routed to Jupyter Notebook’s default URL (typically, /tree).
Other enabled extensions will still be available to the user. The entrypoint simply offers a more direct (backwards compatible) launching mechanism.
Launching a server with multiple extensions#
If multiple extensions are enabled, a Jupyter Server can be launched directly:
> jupyter server
[I 2020-03-23 15:44:53.290 ServerApp] Serving notebooks from local directory: /Users/username/path
[I 2020-03-23 15:44:53.290 ServerApp] Jupyter Server 0.3.0.dev is running at:
[I 2020-03-23 15:44:53.290 ServerApp] http://localhost:8888/?token=<...>
[I 2020-03-23 15:44:53.290 ServerApp] or http://127.0.0.1:8888/?token=<...>
[I 2020-03-23 15:44:53.290 ServerApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[I 2020-03-23 15:44:53.290 ServerApp] Welcome to Project Jupyter! Explore the various tools available and their corresponding documentation. If you are interested in contributing to the platform, please visit the communityresources section at https://jupyter.org/community.html.
[C 2020-03-23 15:44:53.296 ServerApp]
To access the server, open this file in a browser:
file:///Users/username/path/jpserver-####-open.html
Or copy and paste one of these URLs:
http://localhost:8888/?token=<...>
or http://127.0.0.1:8888/?token=<...>
Extensions can also be enabled manually from the Jupyter Server entrypoint using the jpserver_extensions trait:
> jupyter server --ServerApp.jpserver_extensions="myextension=True"
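The same setting can be made persistent in a config file; this sketch assumes an installed extension package named myextension, as in the examples above:

```python
# jupyter_server_config.py
c.ServerApp.jpserver_extensions = {"myextension": True}
```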
Configuring Extensions#
Some Jupyter Server extensions are also configurable applications. There are two ways to configure such extensions: i) pass arguments to the extension’s entry point or ii) list configurable options in a Jupyter config file.
Jupyter Server looks for an extension’s config file in a set of specific paths. Use the jupyter entry point to list these paths:
> jupyter --paths
config:
/Users/username/.jupyter
/usr/local/etc/jupyter
/etc/jupyter
data:
/Users/username/Library/Jupyter
/usr/local/share/jupyter
/usr/share/jupyter
runtime:
/Users/username/Library/Jupyter/runtime
Extension config from file#
Jupyter Server expects the file to be named after the extension, like so: jupyter_{name}_config. For example, the Jupyter Notebook’s config file is jupyter_notebook_config.
Configuration files can be Python or JSON files.
In Python config files, each trait will be prefixed with c., which links the trait to the config loader. For example, Jupyter Notebook config might look like:
# jupyter_notebook_config.py
c.NotebookApp.mathjax_enabled = False
Jupyter Server will automatically load the config for each enabled extension. You can configure each extension by creating its corresponding Jupyter config file.
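For instance, a hypothetical extension application called MyExtensionApp (the name and trait here are illustrative, not a real package) would follow the naming convention above:

```python
# jupyter_myextension_config.py -- hypothetical extension config
c.MyExtensionApp.some_trait = "value"
```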
Extension config on the command line#
Server extension applications can also be configured from the command line, and multiple extensions can be configured at the same time. Simply pass the traits (with their appropriate prefix) to the jupyter server entrypoint, e.g.:
> jupyter server --ServerApp.port=9999 --MyExtension1.trait=False --MyExtension2.trait=True
This will also work with any extension entrypoints that allow other extensions to run side-by-side, e.g.:
> jupyter myextension --ServerApp.port=9999 --MyExtension1.trait=False --MyExtension2.trait=True
Migrating from Notebook Server#
To migrate from notebook server to plain jupyter server, follow these steps:
1. Rename your jupyter_notebook_config.py file to jupyter_server_config.py.
2. Rename all c.NotebookApp traits to c.ServerApp.
For example, if you have the following jupyter_notebook_config.py:
c.NotebookApp.allow_credentials = False
c.NotebookApp.port = 8889
c.NotebookApp.password_required = True
You will have to create the following jupyter_server_config.py file:
c.ServerApp.allow_credentials = False
c.ServerApp.port = 8889
c.ServerApp.password_required = True
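For longer config files, the rename in step 2 can be scripted. A minimal sketch, assuming c.NotebookApp appears in the file only as a trait prefix (the file names are the defaults mentioned above):

```python
from pathlib import Path

def migrate_config(src="jupyter_notebook_config.py",
                   dst="jupyter_server_config.py"):
    """Copy a Notebook config to a Server config, renaming the trait prefix."""
    text = Path(src).read_text()
    Path(dst).write_text(text.replace("c.NotebookApp", "c.ServerApp"))
```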
Running Jupyter Notebook on Jupyter Server#
If you want to switch to Jupyter Server, but you still want to serve Jupyter Notebook to users, you can try NBClassic.
NBClassic is a Jupyter Server extension that serves the Notebook frontend (i.e. all static assets) on top of Jupyter Server. It even loads Jupyter Notebook’s config files.
Warning
NBClassic will only work for a limited time. Jupyter Server is likely to evolve beyond a point where Jupyter Notebook frontend will no longer work with the underlying server. Consider switching to JupyterLab or nteract where there is active development happening.
Running a public Jupyter Server#
The Jupyter Server uses a two-process kernel architecture based on ZeroMQ, as well as Tornado for serving HTTP requests.
Note
By default, Jupyter Server runs locally at 127.0.0.1:8888 and is accessible only from localhost. You may access the server from the browser using http://127.0.0.1:8888.
This document describes how you can secure a Jupyter server and how to run it on a public interface.
Important
This is not the multi-user server you are looking for. This document describes how you can run a public server with a single user. This should only be done by someone who wants remote access to their personal machine. Even so, doing this requires a thorough understanding of the setup’s limitations and security implications. If you allow multiple users to access a Jupyter server as described in this document, their commands may collide, clobber, and overwrite each other.
If you want a multi-user server, the official solution is JupyterHub. To use JupyterHub, you need a Unix server (typically Linux) running somewhere that is accessible to your users on a network. This may run over the public internet, but doing so introduces additional security concerns.
Securing a Jupyter server#
You can protect your Jupyter server with a single password. As of notebook 5.0 this can be done automatically. To set up a password manually, you can configure the ServerApp.password setting in jupyter_server_config.py.
Prerequisite: A Jupyter server configuration file#
Check to see if you have a Jupyter server configuration file, jupyter_server_config.py. The default location for this file is your Jupyter folder, located in your home directory:
Windows:
C:\Users\USERNAME\.jupyter\jupyter_server_config.py
OS X:
/Users/USERNAME/.jupyter/jupyter_server_config.py
Linux:
/home/USERNAME/.jupyter/jupyter_server_config.py
If you don’t already have a Jupyter folder, or if your Jupyter folder doesn’t contain a Jupyter server configuration file, run the following command:
$ jupyter server --generate-config
This command will create the Jupyter folder if necessary, and create a Jupyter server configuration file, jupyter_server_config.py, in this folder.
Automatic Password setup#
As of notebook 5.3, the first time you log in using a token, the server should give you the opportunity to set up a password from the user interface.
You will be presented with a form asking for the current token, as well as your new password; enter both and click on Login and setup new password.
Next time you need to log in you’ll be able to use the new password instead of the login token, otherwise follow the procedure to set a password from the command line.
The ability to change the password at first login time may be disabled by integrations by setting --ServerApp.allow_password_change=False.
Starting at notebook version 5.0, you can enter and store a password for your server with a single command. jupyter server password will prompt you for your password and record the hashed password in your jupyter_server_config.json.
$ jupyter server password
Enter password: ****
Verify password: ****
[JupyterPasswordApp] Wrote hashed password to /Users/you/.jupyter/jupyter_server_config.json
This can be used to reset a lost password, or if you believe your credentials have been leaked and you want to change your password. Changing your password will invalidate all logged-in sessions after a server restart.
Preparing a hashed password#
You can prepare a hashed password manually, using the function jupyter_server.auth.passwd():
>>> from jupyter_server.auth import passwd
>>> passwd()
Enter password:
Verify password:
'sha1:67c9e60bb8b6:9ffede0825894254b2e042ea597d771089e11aed'
Caution
passwd() when called with no arguments will prompt you to enter and verify your password, as in the above code snippet. Although the function can also be passed a string as an argument, such as passwd('mypassword'), please do not pass a string as an argument inside an IPython session, as it will be saved in your input history.
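The hashed value has the form algorithm:salt:digest. As an illustration only (the real verification lives in jupyter_server.auth.passwd_check; this standard-library sketch assumes the digest is computed over the password bytes followed by the salt bytes):

```python
import hashlib

def check_passwd_sketch(hashed, password):
    """Verify a password against an 'algorithm:salt:digest' hash (illustrative)."""
    algorithm, salt, digest = hashed.split(":")
    h = hashlib.new(algorithm)
    h.update(password.encode("utf-8") + salt.encode("ascii"))
    return h.hexdigest() == digest
```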
Adding hashed password to your notebook configuration file#
You can then add the hashed password to your jupyter_server_config.py. The default location for this file is your Jupyter folder in your home directory, ~/.jupyter, e.g.:
c.ServerApp.password = u'sha1:67c9e60bb8b6:9ffede0825894254b2e042ea597d771089e11aed'
Automatic password setup will store the hash in jupyter_server_config.json, while this method stores the hash in jupyter_server_config.py. The .json configuration options take precedence over the .py ones, thus the manual password may not take effect if the JSON file has a password set.
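For reference, a password entry in jupyter_server_config.json takes the same nested JSON shape shown earlier for ports (hash value copied from the example above):

```json
{
  "ServerApp": {
    "password": "sha1:67c9e60bb8b6:9ffede0825894254b2e042ea597d771089e11aed"
  }
}
```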
Using SSL for encrypted communication#
When using a password, it is a good idea to also use SSL with a web certificate, so that your hashed password is not sent unencrypted by your browser.
Important
Web security is rapidly changing and evolving. We provide this document as a convenience to the user, and recommend that the user keep current on changes that may impact security, such as new releases of OpenSSL. The Open Web Application Security Project (OWASP) website is a good resource on general security issues and web practices.
You can start the server in secure protocol mode by setting the certfile option to your self-signed certificate, e.g. mycert.pem, with the command:
$ jupyter server --certfile=mycert.pem --keyfile mykey.key
Tip
A self-signed certificate can be generated with openssl. For example, the following command will create a certificate valid for 365 days, with the key and certificate data written to separate files:
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout mykey.key -out mycert.pem
When starting the notebook server, your browser may warn that your self-signed certificate is insecure or unrecognized. If you wish to have a fully compliant self-signed certificate that will not raise warnings, it is possible (but rather involved) to create one, as explained in detail in this tutorial. Alternatively, you may use Let’s Encrypt to acquire a free SSL certificate and follow the steps in Using Let’s Encrypt to set up a public server.
Running a public notebook server#
If you want to access your notebook server remotely via a web browser, you can do so by running a public notebook server. For optimal security when running a public notebook server, you should first secure the server with a password and SSL/HTTPS as described in Securing a Jupyter server.
Start by creating a certificate file and a hashed password, as explained in Securing a Jupyter server.
If you don’t already have one, create a config file for the notebook using the following command line:
$ jupyter server --generate-config
In the ~/.jupyter directory, edit the notebook config file, jupyter_server_config.py. By default, the notebook config file has all fields commented out. The minimum set of configuration options that you should uncomment and edit in jupyter_server_config.py is the following:
# Set options for certfile, ip, password, and toggle off
# browser auto-opening
c.ServerApp.certfile = u'/absolute/path/to/your/certificate/mycert.pem'
c.ServerApp.keyfile = u'/absolute/path/to/your/certificate/mykey.key'
# Set ip to '*' to bind on all interfaces (ips) for the public server
c.ServerApp.ip = '*'
c.ServerApp.password = u'sha1:bcd259ccf...<your hashed password here>'
c.ServerApp.open_browser = False
# It is a good idea to set a known, fixed port for server access
c.ServerApp.port = 9999
You can then start the notebook using the jupyter server command.
Using Let’s Encrypt#
Let’s Encrypt provides free SSL/TLS certificates. You can also set up a public server using a Let’s Encrypt certificate.
Running a public notebook server will be similar when using a Let’s Encrypt certificate with a few configuration changes. Here are the steps:
1. Create a Let’s Encrypt certificate.
2. Create a hashed password, as described in Preparing a hashed password.
3. If you don’t already have a config file for the notebook, create one using the following command:
$ jupyter server --generate-config
4. In the ~/.jupyter directory, edit the notebook config file, jupyter_server_config.py. By default, the notebook config file has all fields commented out. The minimum set of configuration options that you should uncomment and edit in jupyter_server_config.py is the following:
# Set options for certfile, ip, password, and toggle off
# browser auto-opening
c.ServerApp.certfile = u'/absolute/path/to/your/certificate/fullchain.pem'
c.ServerApp.keyfile = u'/absolute/path/to/your/certificate/privkey.pem'
# Set ip to '*' to bind on all interfaces (ips) for the public server
c.ServerApp.ip = '*'
c.ServerApp.password = u'sha1:bcd259ccf...<your hashed password here>'
c.ServerApp.open_browser = False
# It is a good idea to set a known, fixed port for server access
c.ServerApp.port = 9999
You can then start the notebook using the jupyter server command.
Important
Use ‘https’. Keep in mind that when you enable SSL support, you must access the notebook server over https://, not over plain http://. The startup message from the server prints a reminder in the console, but it is easy to overlook this detail and think the server is for some reason non-responsive. When using SSL, always access the notebook server with ‘https://’.
You may now access the public server by pointing your browser to https://your.host.com:9999, where your.host.com is your public server’s domain.
Firewall Setup#
To function correctly, the firewall on the computer running the jupyter notebook server must be configured to allow connections from client machines to the web interface on the access port c.ServerApp.port set in jupyter_server_config.py. The firewall must also allow connections from 127.0.0.1 (localhost) on ports from 49152 to 65535.
These ports are used by the server to communicate with the notebook kernels.
The kernel communication ports are chosen randomly by ZeroMQ, and may require
multiple connections per kernel, so a large range of ports must be accessible.
Running the notebook with a customized URL prefix#
The notebook dashboard, which is the landing page with an overview of the notebooks in your working directory, is typically found and accessed at the default URL http://localhost:8888/.
If you prefer to customize the URL prefix for the notebook dashboard, you can do so by modifying jupyter_server_config.py. For example, if you prefer that the notebook dashboard be located within a sub-directory that contains other ipython files, e.g. http://localhost:8888/ipython/, you can do so with configuration options like the following (see above for instructions about modifying jupyter_server_config.py):
c.ServerApp.base_url = "/ipython/"
Embedding the notebook in another website#
Sometimes you may want to embed the notebook somewhere on your website, e.g. in an IFrame. To do this, you may need to override the Content-Security-Policy to allow embedding. Assuming your website is at https://mywebsite.example.com, you can embed the notebook on your website with the following configuration setting in jupyter_server_config.py:
c.ServerApp.tornado_settings = {
    "headers": {
        "Content-Security-Policy": "frame-ancestors https://mywebsite.example.com 'self'"
    }
}
Using a gateway server for kernel management#
You can redirect the management of your kernels to a Gateway Server (i.e., Jupyter Kernel Gateway or Jupyter Enterprise Gateway) simply by specifying a Gateway URL via the following command-line option:
$ jupyter notebook --gateway-url=http://my-gateway-server:8888
or via the environment:
JUPYTER_GATEWAY_URL=http://my-gateway-server:8888
or in jupyter_notebook_config.py:
c.GatewayClient.url = "http://my-gateway-server:8888"
When provided, all kernel specifications will be retrieved from the specified Gateway server and all kernels will be managed by that server. This option makes it possible to target kernel processes against managed clusters while the notebook’s management remains local to the Notebook server.
Known issues#
Proxies#
When behind a proxy, especially if your system or browser is set to autodetect the proxy, the notebook web application might fail to connect to the server’s websockets, and present you with a warning at startup. In this case, you need to configure your system not to use the proxy for the server’s address.
For example, in Firefox, go to the Preferences panel, Advanced section, Network tab, click ‘Settings…’, and add the address of the Jupyter server to the ‘No proxy for’ field.
Content-Security-Policy (CSP)#
Certain security guidelines recommend that servers use a Content-Security-Policy (CSP) header to prevent cross-site scripting vulnerabilities, specifically limiting to default-src: https: when possible. This directive causes two problems with Jupyter.
First, it disables execution of inline javascript code, which is used
extensively by Jupyter. Second, it limits communication to the https scheme,
and prevents WebSockets from working because they communicate via the wss
scheme (or ws for insecure communication). Jupyter uses WebSockets for
interacting with kernels, so when you visit a server with such a CSP, your
browser will block attempts to use wss, which will cause you to see
“Connection failed” messages from jupyter notebooks, or simply no response
from jupyter terminals. By looking in your browser’s javascript console, you
can see any error messages that will explain what is failing.
To avoid these problems, you need to add 'unsafe-inline' and connect-src https: wss: to your CSP header, at least for pages served by jupyter. (That is, you can leave your CSP unchanged for other parts of your website.) Note that multiple CSP headers are allowed, but successive CSP headers can only restrict the policy; they cannot loosen it. For example, if your server sends both of these headers:
Content-Security-Policy "default-src https: 'unsafe-inline'"
Content-Security-Policy "connect-src https: wss:"
the first policy will already eliminate wss connections, so the second has no effect. Therefore, you can’t simply add the second header; you have to actually modify your CSP header to look more like this:
Content-Security-Policy "default-src https: 'unsafe-inline'; connect-src https: wss:"
Docker CMD#
Using jupyter server as a Docker CMD results in kernels repeatedly crashing, likely due to a lack of PID reaping. To avoid this, use the tini init as your Dockerfile ENTRYPOINT:
# Add Tini. Tini operates as a process subreaper for jupyter. This prevents
# kernel crashes.
ENV TINI_VERSION v0.6.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /usr/bin/tini
RUN chmod +x /usr/bin/tini
ENTRYPOINT ["/usr/bin/tini", "--"]
EXPOSE 8888
CMD ["jupyter", "server", "--port=8888", "--no-browser", "--ip=0.0.0.0"]
Security in the Jupyter Server#
Since access to the Jupyter Server means access to running arbitrary code, it is important to restrict access to the server. For this reason, Jupyter Server uses token-based authentication that is on by default.
Note
If you enable a password for your server, token authentication is not enabled by default.
When token authentication is enabled, the server uses a token to authenticate requests. This token can be provided to login to the server in three ways:
In the Authorization header, e.g.: Authorization: token abcdef...
In a URL parameter, e.g.:
https://my-server/tree/?token=abcdef...
In the password field of the login form that will be shown to you if you are not logged in.
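For programmatic access, the first option can be sketched with only the standard library; the base URL and token below are placeholders for your own server’s values:

```python
import urllib.request

BASE_URL = "http://localhost:8888"   # adjust to your server
TOKEN = "abcdef..."                  # the token printed at startup

def authed_request(path, token=TOKEN, base=BASE_URL):
    """Build a REST API request carrying the token in the Authorization header."""
    req = urllib.request.Request(base + path)
    req.add_header("Authorization", "token " + token)
    return req

# e.g. urllib.request.urlopen(authed_request("/api/status")) against a live server
```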
When you start a Jupyter server with token authentication enabled (default), a token is generated to use for authentication. This token is logged to the terminal, so that you can copy/paste the URL into your browser:
[I 11:59:16.597 ServerApp] The Jupyter Server is running at:
http://localhost:8888/?token=c8de56fa4deed24899803e93c227592aef6538f93025fe01
If the Jupyter server is going to open your browser automatically, an additional token is generated for launching the browser. This additional token can be used only once, and is used to set a cookie for your browser once it connects. After your browser has made its first request with this one-time-token, the token is discarded and a cookie is set in your browser.
At any later time, you can see the tokens and URLs for all of your running servers with jupyter server list:
$ jupyter server list
Currently running servers:
http://localhost:8888/?token=abc... :: /home/you/notebooks
https://0.0.0.0:9999/?token=123... :: /tmp/public
http://localhost:8889/ :: /tmp/has-password
For servers with token-authentication enabled, the URL in the above listing will include the token, so you can copy and paste that URL into your browser to login. If a server has no token (e.g. it has a password or has authentication disabled), the URL will not include the token argument. Once you have visited this URL, a cookie will be set in your browser and you won’t need to use the token again, unless you switch browsers, clear your cookies, or start a Jupyter server on a new port.
Alternatives to token authentication#
If a generated token doesn’t work well for you, you can set a password for your server. jupyter server password will prompt you for a password, and store the hashed password in your jupyter_server_config.json.
It is possible to disable authentication altogether by setting the token and password to empty strings, but this is NOT RECOMMENDED, unless authentication or access restrictions are handled at a different layer in your web application:
c.ServerApp.token = ""
c.ServerApp.password = ""
Security in notebook documents#
As Jupyter Server becomes more popular for sharing and collaboration, the potential for malicious people to attempt to exploit the notebook for their nefarious purposes increases. IPython 2.0 introduced a security model to prevent execution of untrusted code without explicit user input.
The problem#
The whole point of Jupyter is arbitrary code execution. We have no desire to limit what can be done with a notebook, which would negatively impact its utility.
Unlike other programs, a Jupyter notebook document includes output. Unlike other documents, that output exists in a context that can execute code (via Javascript).
The security problem we need to solve is that no code should execute just because a user has opened a notebook that they did not write. Like any other program, once a user decides to execute code in a notebook, it is considered trusted, and should be allowed to do anything.
Our security model#
Untrusted HTML is always sanitized
Untrusted Javascript is never executed
HTML and Javascript in Markdown cells are never trusted
Outputs generated by the user are trusted
Any other HTML or Javascript (in Markdown cells, output generated by others) is never trusted
The central question of trust is “Did the current user do this?”
The details of trust#
When a notebook is executed and saved, a signature is computed from a digest of the notebook’s contents plus a secret key. This is stored in a database, writable only by the current user. By default, this is located at:
~/.local/share/jupyter/nbsignatures.db # Linux
~/Library/Jupyter/nbsignatures.db # OS X
%APPDATA%/jupyter/nbsignatures.db # Windows
Each signature represents a series of outputs which were produced by code the current user executed, and are therefore trusted.
When you open a notebook, the server computes its signature, and checks if it’s in the database. If a match is found, HTML and Javascript output in the notebook will be trusted at load, otherwise it will be untrusted.
Any output generated during an interactive session is trusted.
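The scheme can be sketched as a keyed digest (HMAC) over the serialized notebook. The real implementation lives in nbformat’s signing machinery; the key handling and serialization details below are simplified assumptions:

```python
import hashlib
import hmac
import json

SECRET_KEY = b"per-user secret key"  # in reality stored alongside the database

def notebook_signature(nb_dict, key=SECRET_KEY):
    """Compute a digest of the notebook's contents keyed by the user's secret."""
    payload = json.dumps(nb_dict, sort_keys=True).encode("utf-8")
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

# A notebook is trusted at load time if its signature is already in the database.
```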
Updating trust#
A notebook’s trust is updated when the notebook is saved. If there are any untrusted outputs still in the notebook, the notebook will not be trusted, and no signature will be stored. If all untrusted outputs have been removed (either via Clear Output or re-execution), then the notebook will become trusted.
While trust is updated per output, this is only for the duration of a single session. A newly loaded notebook file is either trusted or not in its entirety.
Explicit trust#
Sometimes re-executing a notebook to generate trusted output is not an option, either because dependencies are unavailable, or it would take a long time. Users can explicitly trust a notebook in two ways:
At the command-line, with:
jupyter trust /path/to/notebook.ipynb
After loading the untrusted notebook, with File / Trust Notebook
These two methods simply load the notebook, compute a new signature, and add that signature to the user’s database.
Reporting security issues#
If you find a security vulnerability in Jupyter, either a failure of the code to properly implement the model described here, or a failure of the model itself, please report it to security@ipython.org.
If you prefer to encrypt your security reports, you can use this PGP public key.
Affected use cases#
Some use cases that work in Jupyter 1.0 became less convenient in 2.0 as a result of the security changes. We do our best to minimize these annoyances, but security is always at odds with convenience.
Javascript and CSS in Markdown cells#
While never officially supported, it had become common practice to put hidden Javascript or CSS styling in Markdown cells, so that they would not be visible on the page. Since Markdown cells are now sanitized (by Google Caja), all Javascript (including click event handlers, etc.) and CSS will be stripped.
We plan to provide a mechanism for notebook themes, but in the meantime styling the notebook can only be done via either custom.css or CSS in HTML output. The latter only has an effect if the notebook is trusted, because otherwise the output will be sanitized just like Markdown.
Collaboration#
When collaborating on a notebook, people probably want to see the outputs produced by their colleagues’ most recent executions. Since each collaborator’s key will differ, this will result in each share starting in an untrusted state. There are three basic approaches to this:
re-run notebooks when you get them (not always viable)
explicitly trust notebooks via jupyter trust or the notebook menu (annoying, but easy)
share a notebook signatures database, and use configuration dedicated to the collaboration while working on the project
To share a signatures database among users, you can configure:
c.NotebookNotary.data_dir = "/path/to/signature_dir"
to specify a non-default path to the SQLite database (of notebook hashes, essentially).
Configuring Logging#
Jupyter Server (and Jupyter Server extension applications such as Jupyter Lab) are Traitlets applications.
By default, Traitlets applications log to stderr. You can configure them to log to other locations, e.g. log files.
Logging is configured via the logging_config "trait", which accepts a dictionary in the logging.config.dictConfig() format. For more information, look for Application.logging_config in Config file and command line options.
Examples#
Jupyter Server#
A minimal example which logs Jupyter Server output to a file:
c.ServerApp.logging_config = {
    "version": 1,
    "handlers": {
        "logfile": {
            "class": "logging.FileHandler",
            "level": "DEBUG",
            "filename": "jupyter_server.log",
        },
    },
    "loggers": {
        "ServerApp": {
            "level": "DEBUG",
            "handlers": ["console", "logfile"],
        },
    },
}
Note
To keep the default behaviour of logging to stderr, ensure the console handler (provided by Traitlets) is included in the list of handlers.
Warning
Be aware that the ServerApp log may contain security tokens. If redirecting to log files, ensure they have appropriate permissions.
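One way to follow that advice, sketched below with the standard library only (the handler class name is ours, not part of Jupyter Server), is a FileHandler subclass that creates the log file with owner-only permissions before opening it. This works on POSIX systems; on Windows the mode bits are largely ignored:

```python
import logging
import os

class OwnerOnlyFileHandler(logging.FileHandler):
    """Illustrative FileHandler that creates its log file with
    mode 0o600 (owner read/write only) on POSIX systems."""

    def _open(self):
        # Create the file with restrictive permissions first; the
        # base class then opens the existing file for appending.
        fd = os.open(self.baseFilename,
                     os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)
        os.close(fd)
        return super()._open()

handler = OwnerOnlyFileHandler("jupyter_server.log")
logger = logging.getLogger("ServerApp")
logger.addHandler(handler)
logger.warning("log file restricted to the owner")
handler.close()
```

To use such a handler from logging_config, reference it by import path in the handler's "class" key (assuming the class is importable from your configuration, e.g. "class": "mymodule.OwnerOnlyFileHandler").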
Jupyter Server Extension Applications (e.g. Jupyter Lab)#
An example which logs both Jupyter Server and Jupyter Lab output to a file:
Note
Because Jupyter Server and its extension applications are separate Traitlets applications their logging must be configured separately.
c.ServerApp.logging_config = {
    "version": 1,
    "handlers": {
        "logfile": {
            "class": "logging.FileHandler",
            "level": "DEBUG",
            "filename": "jupyter_server.log",
            "formatter": "my_format",
        },
    },
    "formatters": {
        "my_format": {
            "format": "%(asctime)s %(levelname)-8s %(name)-15s %(message)s",
            "datefmt": "%Y-%m-%d %H:%M:%S",
        },
    },
    "loggers": {
        "ServerApp": {
            "level": "DEBUG",
            "handlers": ["console", "logfile"],
        },
    },
}
c.LabApp.logging_config = {
    "version": 1,
    "handlers": {
        "logfile": {
            "class": "logging.FileHandler",
            "level": "DEBUG",
            "filename": "jupyter_server.log",
            "formatter": "my_format",
        },
    },
    "formatters": {
        "my_format": {
            "format": "%(asctime)s %(levelname)-8s %(name)-15s %(message)s",
            "datefmt": "%Y-%m-%d %H:%M:%S",
        },
    },
    "loggers": {
        "LabApp": {
            "level": "DEBUG",
            "handlers": ["console", "logfile"],
        },
    },
}
Note
The configured application name should match the logger name, e.g. c.LabApp.logging_config defines a logger called LabApp.
Tip
This diff modifies the example to log Jupyter Server and Jupyter Lab output to different files:
--- before
+++ after
 c.LabApp.logging_config = {
     'version': 1,
     'handlers': {
         'logfile': {
             'class': 'logging.FileHandler',
             'level': 'DEBUG',
-            'filename': 'jupyter_server.log',
+            'filename': 'jupyter_lab.log',
             'formatter': 'my_format',
         },
     },
Documentation for Developers#
These pages target people writing Jupyter Web applications and server extensions, or people who need to dive deeper in Jupyter Server’s REST API and configuration system.
Architecture Diagrams#
This page describes the Jupyter Server architecture and the main workflows. This information is useful for developers who want to understand how Jupyter Server components are connected and what the principal workflows look like.
To make changes to these diagrams, use the Draw.io open source tool to edit the PNG files.
Jupyter Server Architecture#
The Jupyter Server system can be seen in the figure below:

Jupyter Server contains the following components:
ServerApp is the main Tornado-based application which connects all components together.
Config Manager initializes configuration for the ServerApp. You can define custom classes for the Jupyter Server managers using this config and change ServerApp settings. Follow the Config File Guide to learn about configuration settings and how to build custom config.
Custom Extensions allow you to create custom REST API endpoints on the server. Follow the Extension Guide to learn more about extending ServerApp with extra request handlers.
Gateway Server is a web server that, when configured, provides access to Jupyter kernels running on other hosts. There are different ways to create a gateway server. If your ServerApp needs to communicate with remote kernels residing within resource-managed clusters, you can use Enterprise Gateway, otherwise, you can use Kernel Gateway, where kernels run locally to the gateway server.
Contents Manager and File Contents Manager are responsible for serving notebooks from the file system. Session Manager uses Contents Manager to retrieve the kernel path. Follow the Contents API guide to learn about Contents Manager.
Session Manager processes users’ Sessions. When a user starts a new kernel, Session Manager starts a process to provision a kernel for the user and generates a new Session ID. Each opened Notebook has a separate Session, but different Notebook kernels can use the same Session. That is useful if the user wants to share data across various opened Notebooks. Session Manager uses a SQLite3 database to store the Session information. The database is stored in memory by default, but can be configured to save to disk.
Mapping Kernel Manager is responsible for managing the lifecycles of the kernels running within the ServerApp. It starts a new kernel for a user’s Session and facilitates interrupt, restart, and shutdown operations against the kernel.
Jupyter Client library is used by Jupyter Server to work with the Notebook kernels.
Kernel Manager manages a single kernel for the Notebook. To know more about Kernel Manager, follow the Jupyter Client APIs documentation.
Kernel Spec Manager parses kernel specification JSON files and provides a list of available kernel configurations. To learn about Kernel Spec Manager, check the Jupyter Client guide.
Create Session Workflow#
The create Session workflow can be seen in the figure below:

When a user starts a new kernel, the following steps occur:
The Notebook client sends the POST /api/sessions request to Jupyter Server. This request has all necessary data, such as Notebook name, type, path, and kernel name.
Session Manager asks Contents Manager for the kernel file system path based on the input data.
Session Manager sends kernel path to Mapping Kernel Manager.
Mapping Kernel Manager starts the kernel create process by using Multi Kernel Manager and Kernel Manager. You can learn more about Multi Kernel Manager in the Jupyter Client APIs.
Kernel Manager uses the provisioner layer to launch a new kernel.
Kernel Provisioner is responsible for launching kernels based on the kernel specification. If the kernel specification doesn’t define a provisioner, it uses Local Provisioner to launch the kernel. You can use Kernel Provisioner Base and Kernel Provisioner Factory to create custom provisioners.
Kernel Spec Manager gets the kernel specification from the kernel.json file.
Once Kernel Provisioner launches the kernel, Kernel Manager generates the new kernel ID for Session Manager.
Session Manager saves the new Session data to the SQLite3 database (Session ID, Notebook path, Notebook name, Notebook type, and kernel ID).
Notebook client receives the created Session data.
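The request in step 1 (and the response received in the final step) can be sketched with the standard library. The base URL and token below are assumptions for a locally running server, so only the payload construction is executed here:

```python
import json
from urllib import request

BASE_URL = "http://localhost:8888"   # assumption: local Jupyter Server
TOKEN = "replace-with-your-token"    # assumption: token auth enabled

def build_session_body(path: str, kernel_name: str,
                       session_type: str = "notebook") -> bytes:
    """Build the JSON body for POST /api/sessions (step 1)."""
    return json.dumps({
        "path": path,
        "name": path.rsplit("/", 1)[-1],
        "type": session_type,
        "kernel": {"name": kernel_name},
    }).encode("utf-8")

def create_session(body: bytes) -> dict:
    """Send the request; this needs a running server, so it is
    not executed in this sketch."""
    req = request.Request(
        f"{BASE_URL}/api/sessions", data=body, method="POST",
        headers={"Authorization": f"token {TOKEN}",
                 "Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())  # the created Session data

body = build_session_body("Untitled.ipynb", "python3")
```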
Delete Session Workflow#
The delete Session workflow can be seen in the figure below:

When a user stops a kernel, the following steps occur:
The Notebook client sends the DELETE /api/sessions/{session_id} request to Jupyter Server. This request has the Session ID that the kernel is currently using.
Session Manager gets the Session data from the SQLite3 database and sends the kernel ID to Mapping Kernel Manager.
Mapping Kernel Manager starts the kernel shutdown process by using Multi Kernel Manager and Kernel Manager.
Kernel Manager determines the mode of interrupt from the Kernel Spec Manager. It supports Signal and Message interrupt modes. By default, the Signal interrupt mode is used.
When the interrupt mode is Signal, the Kernel Provisioner interrupts the kernel with the SIGINT operating system signal (although other provisioner implementations may use a different approach).
When the interrupt mode is Message, Session sends the “interrupt_request” message on the control channel.
After interrupting the kernel, Session sends the “shutdown_request” message on the control channel.
Kernel Manager waits for the kernel to shut down. After the timeout, if it detects the kernel process is still running, the Kernel Manager terminates the kernel, sending a SIGTERM operating system signal (or provisioner equivalent). If it finds the kernel process has not terminated, the Kernel Manager follows up with a SIGKILL operating system signal (or provisioner equivalent) to ensure the kernel’s termination.
Kernel Manager cleans up the kernel resources. It removes the kernel’s interprocess communication ports, closes the control socket, and releases the Shell, IOPub, StdIn, Control, and Heartbeat ports.
When shutdown is finished, Session Manager deletes the Session data from the SQLite3 database and responds with a 204 status code to the Notebook client.
Depending on Jupyter Server#
If your project depends directly on Jupyter Server, be sure to watch Jupyter Server’s ChangeLog and pin your project to a version that works for your application. Major releases represent possible backwards-compatibility breaking API changes or features.
When a new major version is released on PyPI, a branch for that version will be created in this repository, and the version of the master branch will be bumped to the next major version number. That way, the master branch always reflects the latest un-released version.
To install the latest patch of a given version:
> pip install jupyter_server --upgrade
To pin your jupyter_server install to a specific version:
> pip install jupyter_server==1.0.0
The REST API#
An interactive version is available here.
- GET /api/#
Get the Jupyter Server version
This endpoint returns only the Jupyter Server version. It does not require any authentication.
- Status Codes:
200 OK – Jupyter Server version information
- Response JSON Object:
version (string) – The Jupyter Server version number as a string.
- GET /api/contents/{path}#
Get contents of file or directory
A client can optionally specify a type and/or format argument via URL parameter. When given, the Contents service shall return a model in the requested type and/or format. If the request cannot be satisfied, e.g. type=text is requested, but the file is binary, then the request shall fail with 400 and have a JSON response containing a ‘reason’ field, with the value ‘bad format’ or ‘bad type’, depending on what was requested.
- Parameters:
path (string) – file path
- Query Parameters:
type (string) – File type (‘file’, ‘directory’)
format (string) – How file content should be returned (‘text’, ‘base64’)
content (integer) – Return content (0 for no content, 1 for return content)
hash (integer) – May return hash hexdigest string of content and the hash algorithm (0 for no hash - default, 1 for return hash). It may be ignored by the content manager.
- Status Codes:
200 OK – Contents of file or directory
400 Bad Request – Bad request
404 Not Found – No item found
500 Internal Server Error – Model key error
- Response Headers:
Last-Modified – Last modified date for file
- Response JSON Object:
content (string) – The content, if requested (otherwise null). Will be an array if type is ‘directory’ (required)
created (string) – Creation timestamp (required)
format (string) – Format of content (one of null, ‘text’, ‘base64’, ‘json’) (required)
hash (string) – [optional] The hexdigest hash string of content, if requested (otherwise null). It cannot be null if hash_algorithm is defined.
hash_algorithm (string) – [optional] The algorithm used to produce the hash, if requested (otherwise null). It cannot be null if hash is defined.
last_modified (string) – Last modified timestamp (required)
mimetype (string) – The mimetype of a file. If content is not null, and type is ‘file’, this will contain the mimetype of the file, otherwise this will be null. (required)
name (string) – Name of file or directory, equivalent to the last part of the path (required)
path (string) – Full path for file or directory (required)
size (integer) – The size of the file or notebook in bytes. If no size is provided, defaults to null.
type (string) – Type of content (required)
writable (boolean) – indicates whether the requester has permission to edit the file (required)
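A sketch of building such a request URL with the query parameters described above (the base URL is an assumption for a local server, and the helper name is ours):

```python
from urllib.parse import quote, urlencode

def contents_url(base_url: str, path: str, **params) -> str:
    """Build a GET /api/contents/{path} URL; params may include the
    type, format, content, and hash query parameters listed above."""
    url = f"{base_url}/api/contents/{quote(path)}"
    query = urlencode({k: v for k, v in params.items() if v is not None})
    return f"{url}?{query}" if query else url

# Ask for the raw text of a file plus its content hash.
url = contents_url("http://localhost:8888", "notebooks/demo.txt",
                   type="file", format="text", content=1, hash=1)
```

Remember that type=text on a binary file fails with a 400 response whose JSON body carries a 'reason' field, as described above.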
- POST /api/contents/{path}#
Create a new file in the specified path
A POST to /api/contents/path creates a New untitled, empty file or directory. A POST to /api/contents/path with body {‘copy_from’: ‘/path/to/OtherNotebook.ipynb’} creates a new copy of OtherNotebook in path.
- Parameters:
path (string) – file path
- Request JSON Object:
copy_from (string) –
ext (string) –
type (string) –
- Status Codes:
201 Created – File created
400 Bad Request – Bad request
404 Not Found – No item found
- Response Headers:
Location – URL for the new file
- Response JSON Object:
content (string) – The content, if requested (otherwise null). Will be an array if type is ‘directory’ (required)
created (string) – Creation timestamp (required)
format (string) – Format of content (one of null, ‘text’, ‘base64’, ‘json’) (required)
hash (string) – [optional] The hexdigest hash string of content, if requested (otherwise null). It cannot be null if hash_algorithm is defined.
hash_algorithm (string) – [optional] The algorithm used to produce the hash, if requested (otherwise null). It cannot be null if hash is defined.
last_modified (string) – Last modified timestamp (required)
mimetype (string) – The mimetype of a file. If content is not null, and type is ‘file’, this will contain the mimetype of the file, otherwise this will be null. (required)
name (string) – Name of file or directory, equivalent to the last part of the path (required)
path (string) – Full path for file or directory (required)
size (integer) – The size of the file or notebook in bytes. If no size is provided, defaults to null.
type (string) – Type of content (required)
writable (boolean) – indicates whether the requester has permission to edit the file (required)
- PATCH /api/contents/{path}#
Rename a file or directory without re-uploading content
- Parameters:
path (string) – file path
- Request JSON Object:
path (string) – New path for file or directory
- Status Codes:
200 OK – Path updated
400 Bad Request – No data provided
- Response Headers:
Location – Updated URL for the file or directory
- Response JSON Object:
content (string) – The content, if requested (otherwise null). Will be an array if type is ‘directory’ (required)
created (string) – Creation timestamp (required)
format (string) – Format of content (one of null, ‘text’, ‘base64’, ‘json’) (required)
hash (string) – [optional] The hexdigest hash string of content, if requested (otherwise null). It cannot be null if hash_algorithm is defined.
hash_algorithm (string) – [optional] The algorithm used to produce the hash, if requested (otherwise null). It cannot be null if hash is defined.
last_modified (string) – Last modified timestamp (required)
mimetype (string) – The mimetype of a file. If content is not null, and type is ‘file’, this will contain the mimetype of the file, otherwise this will be null. (required)
name (string) – Name of file or directory, equivalent to the last part of the path (required)
path (string) – Full path for file or directory (required)
size (integer) – The size of the file or notebook in bytes. If no size is provided, defaults to null.
type (string) – Type of content (required)
writable (boolean) – indicates whether the requester has permission to edit the file (required)
- PUT /api/contents/{path}#
Save or upload file.
Saves the file in the location specified by name and path. PUT is very similar to POST, but the requester specifies the name, whereas with POST, the server picks the name.
- Parameters:
path (string) – file path
- Request JSON Object:
content (string) – The actual body of the document excluding directory type
format (string) – File format (‘json’, ‘text’, ‘base64’)
name (string) – The new filename if changed
path (string) – New path for file or directory
type (string) – Path dtype (‘notebook’, ‘file’, ‘directory’)
- Status Codes:
200 OK – File saved
201 Created – Path created
400 Bad Request – No data provided
- Response Headers:
Location – Updated URL for the file or directory
Location – URL for the file or directory
- Response JSON Object:
content (string) – The content, if requested (otherwise null). Will be an array if type is ‘directory’ (required)
created (string) – Creation timestamp (required)
format (string) – Format of content (one of null, ‘text’, ‘base64’, ‘json’) (required)
hash (string) – [optional] The hexdigest hash string of content, if requested (otherwise null). It cannot be null if hash_algorithm is defined.
hash_algorithm (string) – [optional] The algorithm used to produce the hash, if requested (otherwise null). It cannot be null if hash is defined.
last_modified (string) – Last modified timestamp (required)
mimetype (string) – The mimetype of a file. If content is not null, and type is ‘file’, this will contain the mimetype of the file, otherwise this will be null. (required)
name (string) – Name of file or directory, equivalent to the last part of the path (required)
path (string) – Full path for file or directory (required)
size (integer) – The size of the file or notebook in bytes. If no size is provided, defaults to null.
type (string) – Type of content (required)
writable (boolean) – indicates whether the requester has permission to edit the file (required)
- DELETE /api/contents/{path}#
Delete a file in the given path
- Parameters:
path (string) – file path
- Status Codes:
204 No Content – File deleted
- Response Headers:
Location – URL for the removed file
- GET /api/contents/{path}/checkpoints#
Get a list of checkpoints for a file
List checkpoints for a given file. There will typically be zero or one results.
- Parameters:
path (string) – file path
- Status Codes:
200 OK – List of checkpoints for a file
400 Bad Request – Bad request
404 Not Found – No item found
500 Internal Server Error – Model key error
- Response JSON Object:
[].id (string) – Unique id for the checkpoint. (required)
[].last_modified (string) – Last modified timestamp (required)
- POST /api/contents/{path}/checkpoints#
Create a new checkpoint for a file
Create a new checkpoint with the current state of a file. With the default FileContentsManager, only one checkpoint is supported, so creating new checkpoints clobbers existing ones.
- Parameters:
path (string) – file path
- Status Codes:
201 Created – Checkpoint created
400 Bad Request – Bad request
404 Not Found – No item found
- Response Headers:
Location – URL for the checkpoint
- Response JSON Object:
id (string) – Unique id for the checkpoint. (required)
last_modified (string) – Last modified timestamp (required)
- POST /api/contents/{path}/checkpoints/{checkpoint_id}#
Restore a file to a particular checkpointed state
- Parameters:
path (string) – file path
checkpoint_id (string) – Checkpoint id for a file
- Status Codes:
204 No Content – Checkpoint restored
400 Bad Request – Bad request
- DELETE /api/contents/{path}/checkpoints/{checkpoint_id}#
Delete a checkpoint
- Parameters:
path (string) – file path
checkpoint_id (string) – Checkpoint id for a file
- Status Codes:
204 No Content – Checkpoint deleted
- GET /api/sessions/{session}#
Get session
- Parameters:
session (string) – session uuid
- Status Codes:
200 OK – Session
- Response JSON Object:
id (string) –
kernel (any) – Kernel information
name (string) – name of the session
path (string) – path to the session
type (string) – session type
- PATCH /api/sessions/{session}#
This can be used to rename the session.
- Parameters:
session (string) – session uuid
- Request JSON Object:
id (string) –
kernel (any) – Kernel information
name (string) – name of the session
path (string) – path to the session
type (string) – session type
- Status Codes:
200 OK – Session
400 Bad Request – No data provided
- Response JSON Object:
id (string) –
kernel (any) – Kernel information
name (string) – name of the session
path (string) – path to the session
type (string) – session type
- DELETE /api/sessions/{session}#
Delete a session
- Parameters:
session (string) – session uuid
- Status Codes:
204 No Content – Session (and kernel) were deleted
410 Gone – Kernel was deleted before the session, and the session was not deleted (TODO - check to make sure session wasn’t deleted)
- GET /api/sessions#
List available sessions
- Status Codes:
200 OK – List of current sessions
- Response JSON Object:
[].id (string) –
[].kernel (any) – Kernel information
[].name (string) – name of the session
[].path (string) – path to the session
[].type (string) – session type
- POST /api/sessions#
Create a new session, or return an existing session if a session of the same name already exists
- Request JSON Object:
id (string) –
kernel (any) – Kernel information
name (string) – name of the session
path (string) – path to the session
type (string) – session type
- Status Codes:
201 Created – Session created or returned
501 Not Implemented – Session not available
- Response Headers:
Location – URL for session commands
- Response JSON Object:
id (string) –
kernel (any) – Kernel information
name (string) – name of the session
path (string) – path to the session
type (string) – session type
- GET /api/kernels#
List the JSON data for all kernels that are currently running
- Status Codes:
200 OK – List of currently-running kernel uuids
- Response JSON Object:
[] (any) – Kernel information
- POST /api/kernels#
Start a kernel and return the uuid
- Request JSON Object:
name (string) – Kernel spec name (defaults to default kernel spec for server) (required)
path (string) – API path from root to the cwd of the kernel
- Status Codes:
201 Created – Kernel started
- Response Headers:
Location – Model for started kernel
- GET /api/kernels/{kernel_id}#
Get kernel information
- Parameters:
kernel_id (string) – kernel uuid
- Status Codes:
200 OK – Kernel information
- DELETE /api/kernels/{kernel_id}#
Kill a kernel and delete the kernel id
- Parameters:
kernel_id (string) – kernel uuid
- Status Codes:
204 No Content – Kernel deleted
- POST /api/kernels/{kernel_id}/interrupt#
Interrupt a kernel
- Parameters:
kernel_id (string) – kernel uuid
- Status Codes:
204 No Content – Kernel interrupted
- POST /api/kernels/{kernel_id}/restart#
Restart a kernel
- Parameters:
kernel_id (string) – kernel uuid
- Status Codes:
200 OK – Kernel restarted
- Response Headers:
Location – URL for kernel commands
- GET /api/kernelspecs#
Get kernel specs
- Status Codes:
200 OK – Kernel specs
- Response JSON Object:
default (string) – Default kernel name
kernelspecs (object) –
- GET /api/config/{section_name}#
Get a configuration section by name
- Parameters:
section_name (string) – Name of config section
- Status Codes:
200 OK – Configuration object
- PATCH /api/config/{section_name}#
Update a configuration section by name
- Parameters:
section_name (string) – Name of config section
- Status Codes:
200 OK – Configuration object
- GET /api/terminals#
Get available terminals
- Status Codes:
200 OK – A list of all available terminal ids.
403 Forbidden – Forbidden to access
404 Not Found – Not found
- Response JSON Object:
[].last_activity (string) – ISO 8601 timestamp for the last-seen activity on this terminal. Use this to identify which terminals have been inactive since a given time. Timestamps will be UTC, indicated ‘Z’ suffix.
[].name (string) – name of terminal (required)
- POST /api/terminals#
Create a new terminal
- Status Codes:
200 OK – Successfully created a new terminal
403 Forbidden – Forbidden to access
404 Not Found – Not found
- Response JSON Object:
last_activity (string) – ISO 8601 timestamp for the last-seen activity on this terminal. Use this to identify which terminals have been inactive since a given time. Timestamps will be UTC, indicated ‘Z’ suffix.
name (string) – name of terminal (required)
- GET /api/terminals/{terminal_id}#
Get a terminal session corresponding to an id.
- Parameters:
terminal_id (string) – ID of terminal session
- Status Codes:
200 OK – Terminal session with given id
403 Forbidden – Forbidden to access
404 Not Found – Not found
- Response JSON Object:
last_activity (string) – ISO 8601 timestamp for the last-seen activity on this terminal. Use this to identify which terminals have been inactive since a given time. Timestamps will be UTC, indicated ‘Z’ suffix.
name (string) – name of terminal (required)
- DELETE /api/terminals/{terminal_id}#
Delete a terminal session corresponding to an id.
- Parameters:
terminal_id (string) – ID of terminal session
- Status Codes:
204 No Content – Successfully deleted terminal session
403 Forbidden – Forbidden to access
404 Not Found – Not found
- GET /api/me#
Get the identity of the currently authenticated user. If present, a `permissions` argument may be specified to check what actions the user currently is authorized to take.
- Query Parameters:
permissions (string) – JSON-serialized dictionary of {"resource": ["action",]} (dict of lists of strings) to check. The same dictionary structure will be returned, containing only the actions for which the user is authorized.
- Status Codes:
200 OK – The user’s identity and permissions
- Response JSON Object:
identity (any) – The identity of the currently authenticated user
permissions (object) – A dict of the form {"resource": ["action",]} containing only the AUTHORIZED subset of resource+actions from the permissions specified in the request. If no permission checks were made in the request, this will be empty.
Server Extensions#
A Jupyter Server extension is typically a module or package that extends the server’s REST API/endpoints, i.e. adds extra request handlers to the server’s Tornado Web Application.
For examples of jupyter server extensions, see the homepage.
To get started writing your own extension, see the simple examples in the examples folder in the GitHub jupyter_server repository.
Distributing a server extension#
Putting it all together, authors can distribute their extension following these steps:
- Add a _jupyter_server_extension_points() function at the extension’s root. This function should likely live in the __init__.py found at the root of the extension package. It will look something like this:

      # Found in the __init__.py of package
      def _jupyter_server_extension_points():
          return [{"module": "myextension.app", "app": MyExtensionApp}]
- Create an extension by writing a _load_jupyter_server_extension() function or subclassing ExtensionApp. This is where the extension logic will live (i.e. custom extension handlers, config, etc.). See the sections above for more information on how to create an extension.
- Add the following JSON config file to the extension package. The file should be named after the extension (e.g. myextension.json) and saved in a subdirectory of the package with the prefix jupyter-config/jupyter_server_config.d/. The extension package will have a similar structure to this example:

      myextension
      ├── myextension/
      │   ├── __init__.py
      │   └── app.py
      ├── jupyter-config/
      │   └── jupyter_server_config.d/
      │       └── myextension.json
      └── setup.py
The contents of the JSON file will tell Jupyter Server to load the extension when a user installs the package:
      {
          "ServerApp": {
              "jpserver_extensions": {
                  "myextension": true
              }
          }
      }
When the extension is installed, this JSON file will be copied to the jupyter_server_config.d directory found in one of Jupyter’s paths.
Users can toggle enabling/disabling of the extension using the command:
jupyter server extension disable myextension
which will change the boolean value in the JSON file above.
- Create a setup.py that automatically enables the extension. Add a few extra lines to the extension package’s setup function:

      from setuptools import setup

      setup(
          name="myextension",
          # ...
          include_package_data=True,
          data_files=[
              (
                  "etc/jupyter/jupyter_server_config.d",
                  ["jupyter-config/jupyter_server_config.d/myextension.json"],
              ),
          ],
      )
Migrating an extension to use Jupyter Server#
If you’re a developer of a classic Notebook Server extension, your extension should be able to work with both the classic notebook server and jupyter_server.
There are a few key steps to make this happen:
- Point Jupyter Server to the
load_jupyter_server_extension
function with a new reference name. The
load_jupyter_server_extension
function was the key to loading a server extension in the classic Notebook Server. Jupyter Server expects the name of this function to be prefixed with an underscore, i.e. _load_jupyter_server_extension. You can easily achieve this by adding a reference to the old function name under the new name in the same module:

def load_jupyter_server_extension(nb_server_app):
    ...


# Reference the old function name with the new function name.
_load_jupyter_server_extension = load_jupyter_server_extension
- Add new data files to your extension package that enable it with Jupyter Server.
This new file can go next to your classic notebook server data files. Create a new sub-directory, jupyter_server_config.d, and add a new .json file there:

myextension
├── myextension/
│   ├── __init__.py
│   └── app.py
├── jupyter-config/
│   ├── jupyter_notebook_config.d/
│   │   └── myextension.json
│   └── jupyter_server_config.d/
│       └── myextension.json
└── setup.py
The new .json file should look something like this (you’ll notice the changes in the configured class and trait names):

{
    "ServerApp": {
        "jpserver_extensions": {
            "myextension": true
        }
    }
}
Update your extension package’s setup.py so that the data files are installed into the Jupyter configuration directories when users install the package:

from setuptools import setup

setup(
    name="myextension",
    # ...
    include_package_data=True,
    data_files=[
        (
            "etc/jupyter/jupyter_server_config.d",
            ["jupyter-config/jupyter_server_config.d/myextension.json"],
        ),
        (
            "etc/jupyter/jupyter_notebook_config.d",
            ["jupyter-config/jupyter_notebook_config.d/myextension.json"],
        ),
    ],
)
- (Optional) Point the extension at the new favicon location.
The favicons in the Jupyter Notebook have been moved to a new location in Jupyter Server. If your extension is using one of these icons, you’ll want to add a set of redirect handlers for this. (In ExtensionApp, this is handled automatically.)

This usually means adding a chunk to your load_jupyter_server_extension function similar to this:

def load_jupyter_server_extension(nb_server_app):
    web_app = nb_server_app.web_app
    host_pattern = ".*$"
    base_url = web_app.settings["base_url"]

    # Add custom extension handlers.
    custom_handlers = [
        # ...
    ]

    # Favicon redirects.
    favicons = [
        "favicon.ico",
        "favicon-busy-1.ico",
        "favicon-busy-2.ico",
        "favicon-busy-3.ico",
        "favicon-file.ico",
        "favicon-notebook.ico",
        "favicon-terminal.ico",
    ]
    favicon_redirects = [
        (
            url_path_join(base_url, "/static/favicons/" + favicon),
            RedirectHandler,
            {"url": url_path_join(base_url, "static/base/images/" + favicon)},
        )
        for favicon in favicons
    ]
    favicon_redirects.append(
        (
            url_path_join(base_url, "/static/logo/logo.png"),
            RedirectHandler,
            {"url": url_path_join(base_url, "static/base/images/logo.png")},
        )
    )

    web_app.add_handlers(host_pattern, custom_handlers + favicon_redirects)
File save hooks#
You can configure functions that are run whenever a file is saved. There are two hooks available:
ContentsManager.pre_save_hook runs on the API path and the model with content. This can be used for things like stripping output that would otherwise add noise to version control.

FileContentsManager.post_save_hook runs on the filesystem path and the model without content. This could be used to commit changes after every save, for instance.
They are both called with keyword arguments:
pre_save_hook(model=model, path=path, contents_manager=cm)
post_save_hook(model=model, os_path=os_path, contents_manager=cm)
Examples#
These can both be added to jupyter_server_config.py
.
A pre-save hook for stripping output:
def scrub_output_pre_save(model, **kwargs):
"""scrub output before saving notebooks"""
# only run on notebooks
if model['type'] != 'notebook':
return
# only run on nbformat v4
if model['content']['nbformat'] != 4:
return
for cell in model['content']['cells']:
if cell['cell_type'] != 'code':
continue
cell['outputs'] = []
cell['execution_count'] = None
c.FileContentsManager.pre_save_hook = scrub_output_pre_save
A post-save hook to make a script equivalent whenever the notebook is saved
(replacing the --script
option in older versions of the notebook):
import io
import os
from jupyter_server.utils import to_api_path
_script_exporter = None
def script_post_save(model, os_path, contents_manager, **kwargs):
"""convert notebooks to Python script after save with nbconvert
replaces `ipython notebook --script`
"""
from nbconvert.exporters.script import ScriptExporter
if model["type"] != "notebook":
return
global _script_exporter
if _script_exporter is None:
_script_exporter = ScriptExporter(parent=contents_manager)
log = contents_manager.log
base, ext = os.path.splitext(os_path)
py_fname = base + ".py"
script, resources = _script_exporter.from_filename(os_path)
script_fname = base + resources.get("output_extension", ".txt")
log.info("Saving script /%s", to_api_path(script_fname, contents_manager.root_dir))
with io.open(script_fname, "w", encoding="utf-8") as f:
f.write(script)
c.FileContentsManager.post_save_hook = script_post_save
This could be a simple call to jupyter nbconvert --to script
, but spawning
the subprocess every time is quite slow.
Note
Assigning a new hook to e.g. c.FileContentsManager.pre_save_hook
will override any existing one.
If you want to add new hooks and keep existing ones, you should use e.g.:
contents_manager.register_pre_save_hook(script_pre_save)
contents_manager.register_post_save_hook(script_post_save)
Hooks will then be called in the order they were registered.
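The difference between assignment and registration can be illustrated with a minimal stand-in. HookDemo below is a hypothetical class for demonstration only (the real attributes and register methods live on the contents manager); it models the documented semantics: assignment replaces any existing hook, registration appends, and registered hooks run in registration order.

```python
class HookDemo:
    """Hypothetical stand-in for the contents manager's hook machinery."""

    def __init__(self):
        self.pre_save_hook = None       # assignment target: only one at a time
        self._pre_save_hooks = []       # registered hooks: accumulate

    def register_pre_save_hook(self, hook):
        self._pre_save_hooks.append(hook)

    def run_pre_save_hooks(self, model, path):
        # The assigned hook (if any) runs, then registered hooks in order.
        if self.pre_save_hook is not None:
            self.pre_save_hook(model=model, path=path, contents_manager=self)
        for hook in self._pre_save_hooks:
            hook(model=model, path=path, contents_manager=self)


calls = []
demo = HookDemo()
demo.register_pre_save_hook(lambda **kw: calls.append("first"))
demo.register_pre_save_hook(lambda **kw: calls.append("second"))
demo.run_pre_save_hooks(model={}, path="a.ipynb")
# calls is now ["first", "second"]: registration order is preserved.
```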
Contents API#
The Jupyter Notebook web application provides a graphical interface for creating, opening, renaming, and deleting files in a virtual filesystem.
The ContentsManager
class defines an abstract
API for translating these interactions into operations on a particular storage
medium. The default implementation,
FileContentsManager
, uses the local
filesystem of the server for storage and straightforwardly serializes notebooks
into JSON. Users can override these behaviors by supplying custom subclasses
of ContentsManager.
This section describes the interface implemented by ContentsManager subclasses. We refer to this interface as the Contents API.
Data Model#
Filesystem Entities#
ContentsManager methods represent virtual filesystem entities as dictionaries, which we refer to as models.
Models may contain the following entries:
Key | Type | Info
---|---|---
name | unicode | Basename of the entity.
path | unicode | Full (API-style) path to the entity.
type | unicode | The entity type. One of "notebook", "file" or "directory".
created | datetime | Creation date of the entity.
last_modified | datetime | Last modified date of the entity.
content | variable | The “content” of the entity. (See Below)
mimetype | unicode or None | The mimetype of content, if any. (See Below)
format | unicode or None | The format of content, if any. (See Below)
[optional] hash | unicode or None | The hash of the contents. It cannot be null if hash_algorithm is defined.
[optional] hash_algorithm | unicode or None | The algorithm used to compute the hash value. It cannot be null if hash is defined.
Certain model fields vary in structure depending on the type
field of the
model. There are three model types: notebook, file, and directory.
notebook models

- The format field is always "json".
- The mimetype field is always None.
- The content field contains a nbformat.notebooknode.NotebookNode representing the .ipynb file represented by the model. See the NBFormat documentation for a full description.
- The hash field is a hexdigest string of the hash value of the file. If ContentsManager.get does not support hashing, it should always be None.
- hash_algorithm is the algorithm used to compute the hash value.
file models

- The format field is either "text" or "base64".
- The mimetype field is text/plain for text-format models and application/octet-stream for base64-format models.
- The content field is always of type unicode. For text-format file models, content simply contains the file’s bytes after decoding as UTF-8. Non-text (base64) files are read as bytes, base64 encoded, and then decoded as UTF-8.
- The hash field is a hexdigest string of the hash value of the file. If ContentsManager.get does not support hashing, it should always be None.
- hash_algorithm is the algorithm used to compute the hash value.
directory models

- The format field is always "json".
- The mimetype field is always None.
- The content field contains a list of content-free models representing the entities in the directory.
- The hash field is always None.
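The per-type rules above can be captured in a small validation helper. This is an illustrative sketch only; validate_model_fields is a hypothetical function, not part of the Contents API, and its error messages are made up for demonstration.

```python
def validate_model_fields(model):
    """Check the type-dependent model rules described above.

    Hypothetical helper for illustration; raises ValueError on a violation.
    """
    mtype = model["type"]
    if mtype == "notebook":
        # notebook models: format is always "json", mimetype is always None.
        if model["format"] != "json" or model["mimetype"] is not None:
            raise ValueError("notebook models use format='json', mimetype=None")
    elif mtype == "file":
        # file models: format is "text" or "base64", with a matching mimetype.
        if model["format"] not in ("text", "base64"):
            raise ValueError("file models use format 'text' or 'base64'")
        expected = "text/plain" if model["format"] == "text" else "application/octet-stream"
        if model["mimetype"] != expected:
            raise ValueError(f"file models with this format use mimetype {expected!r}")
    elif mtype == "directory":
        # directory models: format "json", mimetype None, hash always None.
        if model["format"] != "json" or model["mimetype"] is not None:
            raise ValueError("directory models use format='json', mimetype=None")
        if model.get("hash") is not None:
            raise ValueError("directory models have hash=None")
    else:
        raise ValueError(f"unknown model type: {mtype!r}")
    return True
```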
Note
In certain circumstances, we don’t need the full content of an entity to
complete a Contents API request. In such cases, we omit the mimetype
,
content
, and format
keys from the model. This most commonly occurs
when listing a directory, in which circumstance we represent files within
the directory as content-less models to avoid having to recursively traverse
and serialize the entire filesystem.
Sample Models
# Notebook Model with Content and Hash
{
"content": {
"metadata": {},
"nbformat": 4,
"nbformat_minor": 0,
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": "Some **Markdown**",
},
],
},
"created": datetime(2015, 7, 25, 19, 50, 19, 19865),
"format": "json",
"last_modified": datetime(2015, 7, 25, 19, 50, 19, 19865),
"mimetype": None,
"name": "a.ipynb",
"path": "foo/a.ipynb",
"type": "notebook",
"writable": True,
"hash": "f5e43a0b1c2e7836ab3b4d6b1c35c19e2558688de15a6a14e137a59e4715d34b",
"hash_algorithm": "sha256",
}
# Notebook Model without Content
{
"content": None,
"created": datetime.datetime(2015, 7, 25, 20, 17, 33, 271931),
"format": None,
"last_modified": datetime.datetime(2015, 7, 25, 20, 17, 33, 271931),
"mimetype": None,
"name": "a.ipynb",
"path": "foo/a.ipynb",
"type": "notebook",
"writable": True,
}
API Paths#
ContentsManager methods represent the locations of filesystem resources as API-style paths. Such paths are interpreted as relative to the root directory of the notebook server. For compatibility across systems, the following guarantees are made:
- Paths are always unicode, not bytes.
- Paths are not URL-escaped.
- Paths are always forward-slash (/) delimited, even on Windows.
- Leading and trailing slashes are stripped. For example, /foo/bar/buzz/ becomes foo/bar/buzz.
- The empty string ("") represents the root directory.
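The guarantees above can be sketched as a small normalization helper. normalize_api_path is a hypothetical name for illustration, not part of jupyter_server's public API (which provides its own utilities such as to_api_path):

```python
def normalize_api_path(path):
    """Sketch: coerce an OS-style path into the API-path form described above."""
    # Always forward-slash delimited, even on Windows.
    path = path.replace("\\", "/")
    # Leading and trailing slashes are stripped; the empty string that
    # remains for "/" then denotes the root directory.
    return path.strip("/")
```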
Writing a Custom ContentsManager#
The default ContentsManager is designed for users running the notebook as an
application on a personal computer. It stores notebooks as .ipynb files on the
local filesystem, and it maps files and directories in the Notebook UI to files
and directories on disk. It is possible to override how notebooks are stored
by implementing your own custom subclass of ContentsManager
. For example,
if you deploy the notebook in a context where you don’t trust or don’t have
access to the filesystem of the notebook server, it’s possible to write your
own ContentsManager that stores notebooks and files in a database.
Required Methods#
A minimal complete implementation of a custom
ContentsManager
must implement the following
methods:
Method | Description
---|---
get | Get a file or directory model. |
save | Save a file or directory model to path. |
delete_file | Delete the file or directory at path. |
rename_file | Rename a file or directory. |
file_exists | Does a file exist at the given path? |
dir_exists | Does a directory exist at the given path? |
is_hidden | Is path a hidden directory or file? |
You may be required to specify a Checkpoints object, as the default one,
FileCheckpoints
, could be incompatible with your custom
ContentsManager.
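The required surface can be sketched with a dict-backed stand-in. InMemoryContentsSketch below is purely illustrative: a real implementation would subclass jupyter_server.services.contents.manager.ContentsManager (and return fully populated models); this standalone version only shows the shape of the required methods.

```python
class InMemoryContentsSketch:
    """Illustrative stand-in for a ContentsManager subclass, backed by a dict."""

    def __init__(self):
        self._store = {}  # maps API-style path -> model dict

    def get(self, path, content=True, **kwargs):
        model = dict(self._store[path])
        if not content:
            # Content-less models omit content, format, and mimetype values.
            model["content"] = model["format"] = model["mimetype"] = None
        return model

    def save(self, model, path):
        self._store[path] = dict(model, path=path)
        return self._store[path]

    def delete_file(self, path):
        del self._store[path]

    def rename_file(self, old_path, new_path):
        self._store[new_path] = dict(self._store.pop(old_path), path=new_path)

    def file_exists(self, path):
        return path in self._store and self._store[path]["type"] != "directory"

    def dir_exists(self, path):
        # The empty string is the root directory, which always exists.
        return path == "" or (
            path in self._store and self._store[path]["type"] == "directory"
        )

    def is_hidden(self, path):
        # Treat dot-prefixed basenames as hidden, as on a POSIX filesystem.
        return path.rsplit("/", 1)[-1].startswith(".")
```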
Customizing Checkpoints#
Customized Checkpoint definitions allow behavior to be altered and extended.
The Checkpoints
and GenericCheckpointsMixin
classes
(from jupyter_server.services.contents.checkpoints
)
have reusable code and are intended to be used together,
but require the following methods to be implemented.
Method | Description
---|---
rename_checkpoint | Rename a single checkpoint from old_path to new_path. |
list_checkpoints | Return a list of checkpoints for a given file. |
delete_checkpoint | Delete a checkpoint for a file. |
create_file_checkpoint | Create a checkpoint of the current state of a file. |
create_notebook_checkpoint | Create a checkpoint of the current state of a notebook. |
get_file_checkpoint | Get the content of a checkpoint for a non-notebook file. |
get_notebook_checkpoint | Get the content of a checkpoint for a notebook. |
No-op example#
Here is an example of a no-op checkpoints object - note the mixin comes first. The docstrings indicate what each method should do or return for a more complete implementation.
from jupyter_server.services.contents.checkpoints import (
    Checkpoints,
    GenericCheckpointsMixin,
)


class NoOpCheckpoints(GenericCheckpointsMixin, Checkpoints):
    """requires the following methods:"""
def create_file_checkpoint(self, content, format, path):
"""-> checkpoint model"""
def create_notebook_checkpoint(self, nb, path):
"""-> checkpoint model"""
def get_file_checkpoint(self, checkpoint_id, path):
"""-> {'type': 'file', 'content': <str>, 'format': {'text', 'base64'}}"""
def get_notebook_checkpoint(self, checkpoint_id, path):
"""-> {'type': 'notebook', 'content': <output of nbformat.read>}"""
def delete_checkpoint(self, checkpoint_id, path):
"""deletes a checkpoint for a file"""
def list_checkpoints(self, path):
"""returns a list of checkpoint models for a given file,
default just does one per file
"""
return []
def rename_checkpoint(self, checkpoint_id, old_path, new_path):
"""renames checkpoint from old path to new path"""
See GenericFileCheckpoints
in notebook.services.contents.filecheckpoints
for a more complete example.
Testing#
jupyter_server.services.contents.tests
includes several test suites written
against the abstract Contents API. This means that an excellent way to test a
new ContentsManager subclass is to subclass our tests to make them use your
ContentsManager.
Note
PGContents is an example of a complete implementation of a custom
ContentsManager
. It stores notebooks and files in PostgreSQL and encodes
directories as SQL relations. PGContents also provides an example of how to
reuse the notebook’s tests.
Asynchronous Support#
An asynchronous version of the Contents API is available to run slow IO processes concurrently.
AsyncContentsManager
AsyncFileContentsManager
AsyncLargeFileManager
AsyncCheckpoints
AsyncGenericCheckpointsMixin
Note
In most cases, the non-asynchronous Contents API is performant for local filesystems. However, if the Jupyter Notebook web application is interacting with a high-latency virtual filesystem, you may see performance gains by using the asynchronous version. For example, if you’re experiencing terminal lag in the web application due to slow, blocking file operations, the asynchronous version can reduce the lag. Before opting in, it is recommended to compare the performance of the non-async and async options.
WebSocket kernel wire protocols#
The Jupyter Server needs to pass messages between kernels and the Jupyter web application. Kernels use ZeroMQ sockets, and the web application uses a WebSocket.
ZeroMQ wire protocol#
The kernel wire protocol over ZeroMQ takes advantage of multipart messages, which allow a message to be decomposed into parts that are sent and received unmerged. The following table shows the message format (the beginning of the message has been omitted for clarity):

… | 0 | 1 | 2 | 3 | 4 | 5 | …
---|---|---|---|---|---|---|---
… | header | parent_header | metadata | content | buffer_0 | buffer_1 | …
See also the Jupyter Client documentation.
Note that a set of ZeroMQ sockets, one for each channel (shell, iopub, etc.), are multiplexed into one WebSocket. Thus, the channel name must be encoded in WebSocket messages.
WebSocket protocol negotiation#
When opening a WebSocket, the Jupyter web application can optionally provide a list of subprotocols it supports (see e.g. the MDN documentation). If nothing is provided (empty list), then the Jupyter Server assumes the default protocol will be used. Otherwise, the Jupyter Server must select one of the provided subprotocols, or none of them. If none of them is selected, the Jupyter Server must reply with an empty string, which means that the default protocol will be used.
Default WebSocket protocol#
The Jupyter Server must support the default protocol, in which a kernel message is serialized over WebSocket as follows:
The first row gives the byte position of each part from the beginning of the WebSocket message, and the second row gives the part found at that position:

0 | 4 | 8 | … | offset_0 | offset_1 | offset_2 | …
---|---|---|---|---|---|---|---
offset_0 | offset_1 | offset_2 | … | msg | buffer_0 | buffer_1 | …
Where:
- offset_0 is the position of the kernel message (msg) from the beginning of this message, in bytes.
- offset_1 is the position of the first binary buffer (buffer_0) from the beginning of this message, in bytes (optional).
- offset_2 is the position of the second binary buffer (buffer_1) from the beginning of this message, in bytes (optional).
- msg is the kernel message, excluding binary buffers and including the channel name, as a UTF-8-encoded stringified JSON.
- buffer_0 is the first binary buffer (optional).
- buffer_1 is the second binary buffer (optional).
The message can be deserialized by parsing msg
as a JSON object (after decoding it to a string):
msg = {
"channel": channel,
"header": header,
"parent_header": parent_header,
"metadata": metadata,
"content": content,
}
Then retrieving the channel name from it, and collecting the binary buffers, if any:

buffers = [
    buffer_0,
    buffer_1,
    # ...
]
v1.kernel.websocket.jupyter.org
protocol#
The Jupyter Server can optionally support the v1.kernel.websocket.jupyter.org
protocol, in which a kernel message is serialized over WebSocket as follows:
The first row gives the byte position of each part from the beginning of the WebSocket message, and the second row gives the part found at that position:

0 | 8 | 16 | … | 8*offset_number | offset_0 | offset_1 | offset_2 | offset_3 | offset_4 | offset_5 | offset_6 | …
---|---|---|---|---|---|---|---|---|---|---|---|---
offset_number | offset_0 | offset_1 | … | offset_n | channel | header | parent_header | metadata | content | buffer_0 | buffer_1 | …
Where:
- offset_number is a 64-bit (little endian) unsigned integer.
- offset_0 to offset_n are 64-bit (little endian) unsigned integers (with n=offset_number-1).
- channel is a UTF-8 encoded string containing the channel for the message (shell, iopub, etc.).
- header, parent_header, metadata, and content are UTF-8 encoded JSON text representing the given part of a message in the Jupyter message protocol.
- offset_n is the number of bytes in the message.

The message can be deserialized from the bin_msg serialized message as follows (Python code):
import json
import struct

# The offset table comes first: offset_number in the first 8 bytes, then
# offset_number 64-bit little-endian unsigned integers giving
# offset_0 ... offset_n.
offset_number = struct.unpack("<Q", bin_msg[:8])[0]
offsets = struct.unpack("<" + "Q" * offset_number, bin_msg[8 : 8 * (1 + offset_number)])
# offsets == (offset_0, offset_1, ..., offset_n)

channel = bin_msg[offset_0:offset_1].decode("utf-8")
header = json.loads(bin_msg[offset_1:offset_2])
parent_header = json.loads(bin_msg[offset_2:offset_3])
metadata = json.loads(bin_msg[offset_3:offset_4])
content = json.loads(bin_msg[offset_4:offset_5])
buffer_0 = bin_msg[offset_5:offset_6]
buffer_1 = bin_msg[offset_6:offset_7]
# ...
last_buffer = bin_msg[offset_n_minus_1:offset_n]
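The layout above can also be exercised in the other direction. The following is a sketch of a serializer written directly from the table; serialize_v1 is a hypothetical helper name for illustration, not jupyter_server's internal function.

```python
import json
import struct


def serialize_v1(channel, header, parent_header, metadata, content, buffers=()):
    """Sketch: frame a kernel message per the v1.kernel.websocket.jupyter.org
    layout described above. Hypothetical helper name."""
    parts = [channel.encode("utf-8")]
    parts += [
        json.dumps(p).encode("utf-8")
        for p in (header, parent_header, metadata, content)
    ]
    parts += list(buffers)
    # One boundary offset per part, plus the final offset_n (total length).
    offset_number = len(parts) + 1
    # The first part starts right after the 8-byte count and the offset table.
    pos = 8 * (1 + offset_number)
    offsets = []
    for part in parts:
        offsets.append(pos)
        pos += len(part)
    offsets.append(pos)  # offset_n: the number of bytes in the whole message
    head = struct.pack("<Q", offset_number)
    head += struct.pack("<" + "Q" * offset_number, *offsets)
    return head + b"".join(parts)
```

Round-tripping a frame through the deserialization steps shown earlier is a useful sanity check that the offset arithmetic matches the table.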
jupyter_server#
jupyter_server package#
Subpackages#
jupyter_server.auth package#
Module contents#
jupyter_server.base package#
Submodules#
Provides access to variables pertaining to specific call contexts.
- class jupyter_server.base.call_context.CallContext#
Bases:
object
CallContext essentially acts as a namespace for managing context variables.
Although not required, it is recommended that any “file-spanning” context variable names (i.e., variables that will be set or retrieved from multiple files or services) be added as constants to this class definition.
- classmethod context_variable_names()#
Returns a list of variable names set for this call context.
- Returns:
names – A list of variable names set for this call context.
- Return type:
List[str]
- classmethod get(name)#
Returns the value corresponding to the named variable relative to this context.
If the named variable doesn’t exist, None will be returned.
- Parameters:
name (str) – The name of the variable to get from the call context
- Returns:
value – The value associated with the named variable for this call context
- Return type:
Any
Base Tornado handlers for the Jupyter server.
- class jupyter_server.base.handlers.APIHandler(application, request, **kwargs)#
Bases:
JupyterHandler
Base class for API handlers
- class jupyter_server.base.handlers.APIVersionHandler(application, request, **kwargs)#
Bases:
APIHandler
An API handler for the server version.
- class jupyter_server.base.handlers.AuthenticatedFileHandler(application, request, **kwargs)#
Bases:
JupyterHandler
,StaticFileHandler
static files should only be accessible when logged in
- auth_resource = 'contents'#
- compute_etag()#
Compute the etag.
- Return type:
str | None
- class jupyter_server.base.handlers.AuthenticatedHandler(application, request, **kwargs)#
Bases:
RequestHandler
A RequestHandler with an authenticated user.
- property authorizer: Authorizer#
- property content_security_policy: str#
The default Content-Security-Policy header
Can be overridden by defining Content-Security-Policy in settings[‘headers’]
- force_clear_cookie(name, path='/', domain=None)#
Force a cookie clear.
- Return type:
None
- property identity_provider: IdentityProvider#
- property login_available: bool#
May a user proceed to log in?
This returns True if login capability is available, irrespective of whether the user is already logged in or not.
- skip_check_origin()#
Ask my login_handler if I should skip the origin_check
For example: in the default LoginHandler, if a request is token-authenticated, origin checking should be skipped.
- Return type:
- class jupyter_server.base.handlers.FileFindHandler(application, request, **kwargs)#
Bases:
JupyterHandler
,StaticFileHandler
subclass of StaticFileHandler for serving files from a search path
The setting “static_immutable_cache” can be set to serve some static files as immutable (e.g. a file name containing a hash). The setting is a list of base URLs; every static file URL starting with one of those will be served as immutable.
- compute_etag()#
Compute the etag.
- Return type:
str | None
- classmethod get_absolute_path(roots, path)#
locate a file to serve on our static file search path
- Return type:
- initialize(path, default_filename=None, no_cache_paths=None)#
Initialize the file find handler.
- Return type:
None
- root: tuple[str]#
- validate_absolute_path(root, absolute_path)#
check if the file should be served (raises 404, 403, etc.)
- Return type:
str | None
- class jupyter_server.base.handlers.FilesRedirectHandler(application, request, **kwargs)#
Bases:
JupyterHandler
Handler for redirecting relative URLs to the /files/ handler
- class jupyter_server.base.handlers.JupyterHandler(application, request, **kwargs)#
Bases:
AuthenticatedHandler
Jupyter-specific extensions to authenticated handling
Mostly property shortcuts to Jupyter-specific settings.
- check_host()#
Check the host header if remote access disallowed.
Returns True if the request should continue, False otherwise.
- Return type:
- check_origin(origin_to_satisfy_tornado='')#
Check Origin for cross-site API requests, including websockets
Copied from WebSocket with changes: :rtype:
bool
allow unspecified host/origin (e.g. scripts)
allow token-authenticated requests
- check_referer()#
Check Referer for cross-site requests. Disables requests to certain endpoints with external or missing Referer. If set, allow_origin settings are applied to the Referer to whitelist specific cross-origin sites. Used on GET for api endpoints and /files/ to block cross-site inclusion (XSSI).
- Return type:
- property config_manager: ConfigManager#
- property contents_manager: ContentsManager#
- property event_logger: EventLogger#
- get_json_body()#
Return the body of the request as JSON data.
- Return type:
dict[str, Any] | None
- get_origin()#
- Return type:
str | None
- get_template(name)#
Return the jinja template object for a given name
- property kernel_manager: AsyncMappingKernelManager#
- property kernel_spec_manager: KernelSpecManager#
- async prepare(*, _redirect_to_login=True)#
Prepare a response.
- Return type:
Awaitable[None] | None
- render_template(name, **ns)#
Render a template by name.
- property session_manager: SessionManager#
- set_attachment_header(filename)#
Set Content-Disposition: attachment header
As a method to ensure handling of filename encoding
- Return type:
- set_cors_headers()#
Add CORS headers, if defined
Now that current_user is async (jupyter-server 2.0), must be called at the end of prepare(), instead of in set_default_headers.
- Return type:
- property terminal_manager: TerminalManager#
- class jupyter_server.base.handlers.MainHandler(application, request, **kwargs)#
Bases:
JupyterHandler
Simple handler for base_url.
- class jupyter_server.base.handlers.PrometheusMetricsHandler(application, request, **kwargs)#
Bases:
JupyterHandler
Return prometheus metrics for this server
- class jupyter_server.base.handlers.PublicStaticFileHandler(application, request, **kwargs)#
Bases:
StaticFileHandler
Same as web.StaticFileHandler, but decorated to acknowledge that auth is not required.
- class jupyter_server.base.handlers.RedirectWithParams(application, request, **kwargs)#
Bases:
RequestHandler
Same as web.RedirectHandler, but preserves URL parameters
- class jupyter_server.base.handlers.Template404(application, request, **kwargs)#
Bases:
JupyterHandler
Render our 404 template
- class jupyter_server.base.handlers.TrailingSlashHandler(application, request, **kwargs)#
Bases:
RequestHandler
Simple redirect handler that strips trailing slashes
This should be the first, highest priority handler.
- jupyter_server.base.handlers.json_errors(method)#
Decorate methods with this to return GitHub style JSON errors.
This should be used on any JSON API on any handler method that can raise HTTPErrors.
This will grab the latest HTTPError exception using sys.exc_info and then: :rtype:
Any
Set the HTTP status code based on the HTTPError
Create and return a JSON body with a message field describing the error in a human readable form.
- jupyter_server.base.handlers.json_sys_info()#
Get sys info as json.
Base websocket classes.
- class jupyter_server.base.websocket.WebSocketMixin#
Bases:
object
Mixin for common websocket options
- check_origin(origin=None)#
Check Origin == Host or Access-Control-Allow-Origin.
Tornado >= 4 calls this method automatically, raising 403 if it returns False.
- clear_cookie(*args, **kwargs)#
meaningless for websockets
- last_ping = 0.0#
- last_pong = 0.0#
- on_pong(data)#
Handle a pong message.
- open(*args, **kwargs)#
Open the websocket.
- ping_callback = None#
- property ping_interval#
The interval for websocket keep-alive pings.
Set ws_ping_interval = 0 to disable pings.
- property ping_timeout#
If no ping is received in this many milliseconds, close the websocket connection (VPNs, etc. can fail to cleanly close ws connections). Default is max of 3 pings or 30 seconds.
- prepare(*args, **kwargs)#
Handle a get request.
- send_ping()#
send a ping to keep the websocket alive
This module is deprecated in Jupyter Server 2.0
Module contents#
jupyter_server.extension package#
Submodules#
An extension application.
- class jupyter_server.extension.application.ExtensionApp(**kwargs)#
Bases:
JupyterApp
Base class for configurable Jupyter Server Extension Applications.
ExtensionApp subclasses can be initialized two ways:
Extension is listed as a jpserver_extension, and ServerApp calls its load_jupyter_server_extension classmethod. This is the classic way of loading a server extension.
Extension is launched directly by calling its
launch_instance
class method. This method can be set as an entry_point in the extension’s setup.py.
- classes: ClassesType = [<class 'jupyter_server.serverapp.ServerApp'>]#
- property config_file_paths#
Look on the same path as our parent for config files
- current_activity()#
Return a list of activity happening in this extension.
- default_url#
A trait for unicode strings.
- extension_url = '/'#
- file_url_prefix#
A trait for unicode strings.
- classmethod get_extension_package()#
Get an extension package.
- classmethod get_extension_point()#
Get an extension point.
- handlers: List[tuple[t.Any, ...]]#
Handlers appended to the server.
- initialize()#
Initialize the extension app. The corresponding server app and webapp should already be initialized by this step.
Appends Handlers to the ServerApp,
Passes config and settings from ExtensionApp to the Tornado web application
Points Tornado Webapp to templates and static assets.
- initialize_handlers()#
Override this method to append handlers to a Jupyter Server.
- classmethod initialize_server(argv=None, load_other_extensions=True, **kwargs)#
Creates an instance of ServerApp and explicitly sets this extension to enabled=True (i.e., superseding any disabling found in other config files).
The
launch_instance
method uses this method to initialize and start a server.
- initialize_settings()#
Override this method to add handling of settings.
- initialize_templates()#
Override this method to add handling of template files.
- classmethod launch_instance(argv=None, **kwargs)#
Launch the extension like an application. Initializes+configs a stock server and appends the extension to the server. Then starts the server and routes to extension’s landing page.
- classmethod load_classic_server_extension(serverapp)#
Enables extension to be loaded as classic Notebook (jupyter/notebook) extension.
- load_other_extensions = True#
- classmethod make_serverapp(**kwargs)#
Instantiate the ServerApp
Override to customize the ServerApp before it loads any configuration
- Return type:
- name: str | Unicode[str, str] = 'ExtensionApp'#
- open_browser#
Whether to open in a browser after starting. The specific browser used is platform dependent and determined by the python standard library
webbrowser
module, unless it is overridden using the –browser (ServerApp.browser) configuration option.
- serverapp: ServerApp | None#
A trait which allows any value.
- serverapp_config: dict[str, t.Any] = {}#
- settings#
Settings that will be passed to the server.
- start()#
Start the underlying Jupyter server.
Server should be started after extension is initialized.
- static_paths#
paths to search for serving static files.
This allows adding javascript/css to be available from the notebook server machine, or overriding individual files in the IPython
- static_url_prefix#
Url where the static assets for the extension are served.
- stop()#
Stop the underlying Jupyter server.
- async stop_extension()#
Cleanup any resources managed by this extension.
- template_paths#
Paths to search for serving jinja templates.
Can be used to override templates from notebook.templates.
- class jupyter_server.extension.application.ExtensionAppJinjaMixin(*args, **kwargs)#
Bases:
HasTraits
Use Jinja templates for HTML templates on top of an ExtensionApp.
- jinja2_options#
Options to pass to the jinja2 environment for this
- exception jupyter_server.extension.application.JupyterServerExtensionException#
Bases:
Exception
Exception class for raising for Server extensions errors.
Extension config.
- class jupyter_server.extension.config.ExtensionConfigManager(**kwargs)#
Bases:
ConfigManager
A manager class to interface with Jupyter Server Extension config found in a
config.d
folder. It is assumed that all configuration files in this directory are JSON files.- disable(name)#
Disable an extension by name.
- enable(name)#
Enable an extension by name.
- enabled(name, section_name='jupyter_server_config', include_root=True)#
Is the extension enabled?
- get_jpserver_extensions(section_name='jupyter_server_config')#
Return the jpserver_extensions field from all config files found.
An extension handler.
- class jupyter_server.extension.handler.ExtensionHandlerJinjaMixin#
Bases:
object
Mixin class for ExtensionApp handlers that use jinja templating for template rendering.
- class jupyter_server.extension.handler.ExtensionHandlerMixin#
Bases:
object
Base class for Jupyter server extension handlers.
Subclasses can serve static files behind a namespaced endpoint: “<base_url>/static/<name>/”
This allows multiple extensions to serve static files under their own namespace and avoid intercepting requests for other extensions.
- property extensionapp: ExtensionApp#
- static_url(path, include_host=None, **kwargs)#
Returns a static URL for the given relative static file path. This method requires you to set the {name}_static_path setting in your extension (which specifies the root directory of your static files). This method returns a versioned URL (by default appending ?v=<signature>), which allows the static files to be cached indefinitely. This can be disabled by passing include_version=False (in the default implementation; other static file implementations are not required to support this, but they may support other options). By default this method returns URLs relative to the current host, but if include_host is true the URL returned will be absolute. If this handler has an include_host attribute, that value will be used as the default for all static_url calls that do not pass include_host as a keyword argument.
- Return type:
str
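The versioned-URL behavior described above can be sketched as follows. The signature scheme shown here (a truncated SHA-1 of the file bytes) is an assumption for illustration only; Tornado computes its own signature internally:

```python
import hashlib

def versioned_static_url(name, path, content):
    """Sketch of a namespaced, versioned static URL for an extension.

    The truncated-sha1 signature is a stand-in for Tornado's actual
    versioning scheme; the namespaced prefix matches the
    "<base_url>/static/<name>/" endpoint described above.
    """
    signature = hashlib.sha1(content).hexdigest()[:8]
    return f"/static/{name}/{path}?v={signature}"

# "my_extension" and "app.js" are hypothetical names for illustration.
url = versioned_static_url("my_extension", "app.js", b"console.log('hi')")
```

Because the signature changes whenever the file content changes, clients can cache the asset indefinitely.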
The extension manager.
- class jupyter_server.extension.manager.ExtensionManager(**kwargs)#
Bases:
LoggingConfigurable
High level interface for finding, validating, linking, loading, and managing Jupyter Server extensions.
Usage: m = ExtensionManager(config_manager=…)
- add_extension(extension_name, enabled=False)#
Try to add extension to manager, return True if successful. Otherwise, return False.
- any_activity()#
Check for any activity currently happening across all extension applications.
- config_manager#
A trait whose value must be an instance of a specified class.
The value can also be an instance of a subclass of the specified class.
Subclasses can declare default classes by overriding the klass attribute
- property extension_apps#
Return mapping of extension names and sets of ExtensionApp objects.
- property extension_points#
Return mapping of extension point names and ExtensionPoint objects.
- extensions#
Dictionary with extension package names as keys and ExtensionPackage objects as values.
- from_config_manager(config_manager)#
Add extensions found by an ExtensionConfigManager
- from_jpserver_extensions(jpserver_extensions)#
Add extensions from ‘jpserver_extensions’-like dictionary.
- link_all_extensions()#
Link all enabled extensions to an instance of ServerApp
- link_extension(name)#
Link an extension by name.
- linked_extensions#
Dictionary with extension names as keys
values are True if the extension is linked, False if not.
- load_all_extensions()#
Load all enabled extensions and append them to the parent ServerApp.
- load_extension(name)#
Load an extension by name.
- serverapp#
A trait which allows any value.
- property sorted_extensions#
Returns an extensions dictionary, sorted alphabetically.
- async stop_all_extensions()#
Call the shutdown hooks in all extensions.
- async stop_extension(name, apps)#
Call the shutdown hooks in the specified apps.
- class jupyter_server.extension.manager.ExtensionPackage(**kwargs: Any)#
Bases:
LoggingConfigurable
An API for interfacing with a Jupyter Server extension package.
Usage:
ext_name = “my_extensions” extpkg = ExtensionPackage(name=ext_name)
- enabled#
Whether the extension package is enabled.
- extension_points#
An instance of a Python dict.
One or more traits can be passed to the constructor to validate the keys and/or values of the dict. If you need more detailed validation, you may use a custom validator method.
Changed in version 5.0: Added key_trait for validating dict keys.
Changed in version 5.0: Deprecated ambiguous trait, traits args in favor of value_trait, per_key_traits.
- link_all_points(serverapp)#
Link all extension points.
- link_point(point_name, serverapp)#
Link an extension point.
- load_all_points(serverapp)#
Load all extension points.
- load_point(point_name, serverapp)#
Load an extension point.
- metadata#
Extension metadata loaded from the extension package.
- module#
The module for this extension package. None if not enabled
- name#
Name of an importable Python package.
- validate()#
Validate all extension points in this package.
- version#
The version of this extension package, if it can be found. Otherwise, an empty string.
- class jupyter_server.extension.manager.ExtensionPoint(*args, **kwargs)#
Bases:
HasTraits
A simple API for connecting to a Jupyter Server extension point defined by metadata and importable from a Python package.
- property app#
The extension's app instance, if the metadata includes an app field.
- property config#
Return any configuration provided by this extension point.
- link(serverapp)#
Link the extension to a Jupyter ServerApp object.
This looks for a
_link_jupyter_server_extension
function in the extension’s module or ExtensionApp class.
- property linked#
Whether this extension point has been linked to the server.
Will pull from ExtensionApp’s trait, if this point is an instance of ExtensionApp.
- load(serverapp)#
Load the extension in a Jupyter ServerApp object.
This looks for a
_load_jupyter_server_extension
function in the extension’s module or ExtensionApp class.
- metadata#
An instance of a Python dict.
One or more traits can be passed to the constructor to validate the keys and/or values of the dict. If you need more detailed validation, you may use a custom validator method.
Changed in version 5.0: Added key_trait for validating dict keys.
Changed in version 5.0: Deprecated ambiguous trait, traits args in favor of value_trait, per_key_traits.
- property module#
The imported module (using importlib.import_module)
- property module_name#
Name of the Python package module where the extension’s _load_jupyter_server_extension can be found.
- property name#
Name of the extension.
If it's not provided in the metadata, name is set to the extension's module name.
- validate()#
Check that both a linker and a loader exist.
Utilities for installing extensions
- exception jupyter_server.extension.serverextension.ArgumentConflict#
Bases:
ValueError
- class jupyter_server.extension.serverextension.BaseExtensionApp(**kwargs)#
Bases:
JupyterApp
Base extension installer app
- aliases: StrDict = {'config': 'JupyterApp.config_file', 'log-level': 'Application.log_level'}#
- flags: StrDict = {'debug': ({'Application': {'log_level': 10}}, 'set log level to logging.DEBUG (maximize logging output)'), 'py': ({'BaseExtensionApp': {'python': True}}, 'Install from a Python package'), 'python': ({'BaseExtensionApp': {'python': True}}, 'Install from a Python package'), 'show-config': ({'Application': {'show_config': True}}, "Show the application's configuration (human-readable format)"), 'show-config-json': ({'Application': {'show_config_json': True}}, "Show the application's configuration (json format)"), 'sys-prefix': ({'BaseExtensionApp': {'sys_prefix': True}}, 'Use sys.prefix as the prefix for installing extensions (for environments, packaging)'), 'system': ({'BaseExtensionApp': {'sys_prefix': False, 'user': False}}, 'Apply the operation system-wide'), 'user': ({'BaseExtensionApp': {'user': True}}, 'Apply the operation only for the given user')}#
- python#
Install from a Python package
- sys_prefix#
Use the sys.prefix as the prefix
- user#
Whether to do a user install
- version: str | Unicode[str, str | bytes] = '2.14.0'#
- class jupyter_server.extension.serverextension.DisableServerExtensionApp(**kwargs)#
Bases:
ToggleServerExtensionApp
An App that disables Server Extensions
- description: str | Unicode[str, str | bytes] = '\n Disable a server extension in configuration.\n\n Usage\n jupyter server extension disable [--system|--sys-prefix]\n '#
- name: str | Unicode[str, str | bytes] = 'jupyter server extension disable'#
- class jupyter_server.extension.serverextension.EnableServerExtensionApp(**kwargs)#
Bases:
ToggleServerExtensionApp
An App that enables (and validates) Server Extensions
- description: str | Unicode[str, str | bytes] = '\n Enable a server extension in configuration.\n\n Usage\n jupyter server extension enable [--system|--sys-prefix]\n '#
- name: str | Unicode[str, str | bytes] = 'jupyter server extension enable'#
- class jupyter_server.extension.serverextension.ListServerExtensionsApp(**kwargs)#
Bases:
BaseExtensionApp
An App that lists (and validates) Server Extensions
- description: str | Unicode[str, str | bytes] = 'List all server extensions known by the configuration system'#
- list_server_extensions()#
List all enabled and disabled server extensions, by config path
Enabled extensions are validated, potentially generating warnings.
- Return type:
- name: str | Unicode[str, str | bytes] = 'jupyter server extension list'#
- version: str | Unicode[str, str | bytes] = '2.14.0'#
- class jupyter_server.extension.serverextension.ServerExtensionApp(**kwargs)#
Bases:
BaseExtensionApp
Root level server extension app
- description: str = 'Work with Jupyter server extensions'#
- examples: str | Unicode[str, str | bytes] = '\njupyter server extension list # list all configured server extensions\njupyter server extension enable --py <packagename> # enable all server extensions in a Python package\njupyter server extension disable --py <packagename> # disable all server extensions in a Python package\n'#
- name: str | Unicode[str, str | bytes] = 'jupyter server extension'#
- subcommands: dict[str, t.Any] = {'disable': (<class 'jupyter_server.extension.serverextension.DisableServerExtensionApp'>, 'Disable a server extension'), 'enable': (<class 'jupyter_server.extension.serverextension.EnableServerExtensionApp'>, 'Enable a server extension'), 'list': (<class 'jupyter_server.extension.serverextension.ListServerExtensionsApp'>, 'List server extensions')}#
- version: str | Unicode[str, str | bytes] = '2.14.0'#
- class jupyter_server.extension.serverextension.ToggleServerExtensionApp(**kwargs)#
Bases:
BaseExtensionApp
A base class for enabling/disabling extensions
- description: str | Unicode[str, str | bytes] = 'Enable/disable a server extension using frontend configuration files.'#
- flags: StrDict = {'debug': ({'Application': {'log_level': 10}}, 'set log level to logging.DEBUG (maximize logging output)'), 'py': ({'ToggleServerExtensionApp': {'python': True}}, 'Install from a Python package'), 'python': ({'ToggleServerExtensionApp': {'python': True}}, 'Install from a Python package'), 'show-config': ({'Application': {'show_config': True}}, "Show the application's configuration (human-readable format)"), 'show-config-json': ({'Application': {'show_config_json': True}}, "Show the application's configuration (json format)"), 'sys-prefix': ({'ToggleServerExtensionApp': {'sys_prefix': True}}, 'Use sys.prefix as the prefix for installing server extensions'), 'system': ({'ToggleServerExtensionApp': {'sys_prefix': False, 'user': False}}, 'Perform the operation system-wide'), 'user': ({'ToggleServerExtensionApp': {'user': True}}, 'Perform the operation for the current user')}#
- name: str | Unicode[str, str | bytes] = 'jupyter server extension enable/disable'#
- jupyter_server.extension.serverextension.toggle_server_extension_python(import_name, enabled=None, parent=None, user=False, sys_prefix=True)#
Toggle the boolean setting for a given server extension in a Jupyter config file.
- Return type:
None
Extension utilities.
- exception jupyter_server.extension.utils.ExtensionLoadingError#
Bases:
Exception
An extension loading error.
- exception jupyter_server.extension.utils.ExtensionMetadataError#
Bases:
Exception
An extension metadata error.
- exception jupyter_server.extension.utils.ExtensionModuleNotFound#
Bases:
Exception
An extension module not found error.
- exception jupyter_server.extension.utils.NotAnExtensionApp#
Bases:
Exception
An error raised when a module is not an extension.
- jupyter_server.extension.utils.get_loader(obj, logger=None)#
Looks for _load_jupyter_server_extension as an attribute of the object or module.
Adds backwards compatibility for old function name missing the underscore prefix.
- jupyter_server.extension.utils.get_metadata(package_name, logger=None)#
Find the extension metadata from an extension package.
This looks for a _jupyter_server_extension_points function that returns metadata about all extension points within a Jupyter Server Extension package. If it doesn't exist, return a basic metadata packet given the module name.
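The discovery hook that get_metadata() looks for can be sketched as a tiny module-level function. This is a minimal sketch; "my_extension.app" is a hypothetical module name:

```python
# Sketch of the discovery hook an extension package exposes.
# "my_extension.app" is a hypothetical module name for illustration.

def _jupyter_server_extension_points():
    # Each entry names a module that provides a
    # _load_jupyter_server_extension function (an "app" key pointing
    # at an ExtensionApp subclass may also be supplied).
    return [{"module": "my_extension.app"}]

points = _jupyter_server_extension_points()
```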
- jupyter_server.extension.utils.validate_extension(name)#
Raises an exception if the extension is missing a needed hook or metadata field. An extension is valid if: 1) name is an importable Python package; 2) the package has a _jupyter_server_extension_points function; 3) each extension path has a _load_jupyter_server_extension function.
If this works, nothing should happen.
Module contents#
jupyter_server.files package#
Submodules#
Serve files directly from the ContentsManager.
- class jupyter_server.files.handlers.FilesHandler(application, request, **kwargs)#
Bases:
JupyterHandler
,StaticFileHandler
Serve files via ContentsManager.
Normally used when ContentsManager is not a FileContentsManager.
FileContentsManager subclasses use AuthenticatedFilesHandler by default, a subclass of StaticFileHandler.
- auth_resource = 'contents'#
- property content_security_policy#
The content security policy.
- get(path, include_body=True)#
Get a file by path.
- head(path)#
The head response.
- Return type:
Awaitable[None] | None
Module contents#
jupyter_server.gateway package#
Submodules#
Gateway connection classes.
- class jupyter_server.gateway.connections.GatewayWebSocketConnection(**kwargs)#
Bases:
BaseKernelWebsocketConnection
Web socket connection that proxies to a kernel/enterprise gateway.
- async connect()#
Connect to the socket.
- disconnect()#
Handle a disconnect.
- disconnected#
A boolean (True, False) trait.
- handle_outgoing_message(incoming_msg, *args)#
Send message to the notebook client.
- Return type:
- kernel_ws_protocol#
A trait for unicode strings.
- retry#
An int trait.
- ws#
A trait whose value must be an instance of a specified class.
The value can also be an instance of a subclass of the specified class.
Subclasses can declare default classes by overriding the klass attribute
- ws_future#
A trait whose value must be an instance of a specified class.
The value can also be an instance of a subclass of the specified class.
Subclasses can declare default classes by overriding the klass attribute
A kernel gateway client.
- class jupyter_server.gateway.gateway_client.GatewayClient(**kwargs: Any)#
Bases:
SingletonConfigurable
This class manages the gateway configuration. It is its own singleton class so that these values can be shared across all objects. It also contains helper methods to build request arguments out of the various config options.
- KERNEL_LAUNCH_TIMEOUT = 40#
- accept_cookies#
Accept and manage cookies sent by the service side. This is often useful for load balancers to decide which backend node to use. (JUPYTER_GATEWAY_ACCEPT_COOKIES env var)
- accept_cookies_env = 'JUPYTER_GATEWAY_ACCEPT_COOKIES'#
- accept_cookies_value = False#
- allowed_envs#
A comma-separated list of environment variable names that will be included, along with their values, in the kernel startup request. The corresponding client_envs configuration value must also be set on the Gateway server, since that configuration value indicates which environment values to make available to the kernel. (JUPYTER_GATEWAY_ALLOWED_ENVS env var)
- allowed_envs_default_value = ''#
- allowed_envs_env = 'JUPYTER_GATEWAY_ALLOWED_ENVS'#
- auth_header_key#
The authorization header’s key name (typically ‘Authorization’) used in the HTTP headers. The header will be formatted as:
{'{auth_header_key}': '{auth_scheme} {auth_token}'}
If the authorization header key takes a single value, auth_scheme should be set to None and auth_token should be configured to use the appropriate value. (JUPYTER_GATEWAY_AUTH_HEADER_KEY env var)
- auth_header_key_default_value = 'Authorization'#
- auth_header_key_env = 'JUPYTER_GATEWAY_AUTH_HEADER_KEY'#
- auth_scheme#
The auth scheme, added as a prefix to the authorization token used in the HTTP headers. (JUPYTER_GATEWAY_AUTH_SCHEME env var)
- auth_scheme_default_value = 'token'#
- auth_scheme_env = 'JUPYTER_GATEWAY_AUTH_SCHEME'#
- auth_token#
The authorization token used in the HTTP headers. The header will be formatted as:
{'{auth_header_key}': '{auth_scheme} {auth_token}'} (JUPYTER_GATEWAY_AUTH_TOKEN env var)
- auth_token_default_value = ''#
- auth_token_env = 'JUPYTER_GATEWAY_AUTH_TOKEN'#
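The header format described above for auth_header_key, auth_scheme, and auth_token can be sketched directly; the default values shown match the defaults listed here, while "abc123" is a hypothetical token:

```python
def build_auth_header(auth_header_key="Authorization",
                      auth_scheme="token",
                      auth_token="abc123"):
    """Format the authorization header as {'<key>': '<scheme> <token>'}.

    When auth_scheme is None the token is used bare, matching the
    single-value case described for auth_header_key.
    """
    value = auth_token if auth_scheme is None else f"{auth_scheme} {auth_token}"
    return {auth_header_key: value}

headers = build_auth_header()  # {'Authorization': 'token abc123'}
```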
- ca_certs#
The filename of CA certificates or None to use defaults. (JUPYTER_GATEWAY_CA_CERTS env var)
- ca_certs_env = 'JUPYTER_GATEWAY_CA_CERTS'#
- client_cert#
The filename for client SSL certificate, if any. (JUPYTER_GATEWAY_CLIENT_CERT env var)
- client_cert_env = 'JUPYTER_GATEWAY_CLIENT_CERT'#
- client_key#
The filename for client SSL key, if any. (JUPYTER_GATEWAY_CLIENT_KEY env var)
- client_key_env = 'JUPYTER_GATEWAY_CLIENT_KEY'#
- connect_timeout#
The time allowed for HTTP connection establishment with the Gateway server. (JUPYTER_GATEWAY_CONNECT_TIMEOUT env var)
- connect_timeout_default_value = 40.0#
- connect_timeout_env = 'JUPYTER_GATEWAY_CONNECT_TIMEOUT'#
- emit(data)#
Emit event using the core event schema from Jupyter Server’s Gateway Client.
- env_whitelist#
Deprecated, use
GatewayClient.allowed_envs
- event_logger#
A trait whose value must be an instance of a specified class.
The value can also be an instance of a subclass of the specified class.
Subclasses can declare default classes by overriding the klass attribute
- event_schema_id = 'https://events.jupyter.org/jupyter_server/gateway_client/v1'#
- property gateway_enabled#
- gateway_retry_interval#
The initial time allowed for HTTP reconnection with the Gateway server. Each subsequent retry waits JUPYTER_GATEWAY_RETRY_INTERVAL multiplied by a factor of two per retry, capped at JUPYTER_GATEWAY_RETRY_INTERVAL_MAX. (JUPYTER_GATEWAY_RETRY_INTERVAL env var)
- gateway_retry_interval_default_value = 1.0#
- gateway_retry_interval_env = 'JUPYTER_GATEWAY_RETRY_INTERVAL'#
- gateway_retry_interval_max#
The maximum time allowed for HTTP reconnection retry with the Gateway server. (JUPYTER_GATEWAY_RETRY_INTERVAL_MAX env var)
- gateway_retry_interval_max_default_value = 30.0#
- gateway_retry_interval_max_env = 'JUPYTER_GATEWAY_RETRY_INTERVAL_MAX'#
- gateway_retry_max#
The maximum retries allowed for HTTP reconnection with the Gateway server. (JUPYTER_GATEWAY_RETRY_MAX env var)
- gateway_retry_max_default_value = 5#
- gateway_retry_max_env = 'JUPYTER_GATEWAY_RETRY_MAX'#
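The reconnection schedule implied by these three settings (an interval that doubles per retry, capped at the maximum, for at most gateway_retry_max attempts) can be sketched as:

```python
def retry_intervals(base=1.0, cap=30.0, max_retries=5):
    """Sketch of the doubling retry schedule described above.

    Defaults mirror gateway_retry_interval_default_value (1.0),
    gateway_retry_interval_max_default_value (30.0), and
    gateway_retry_max_default_value (5).
    """
    return [min(base * (2 ** n), cap) for n in range(max_retries)]

print(retry_intervals())  # [1.0, 2.0, 4.0, 8.0, 16.0]
```

With more retries allowed, later waits flatten at the cap rather than growing without bound.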
- gateway_token_renewer: GatewayTokenRenewerBase#
- gateway_token_renewer_class#
The class to use for Gateway token renewal. (JUPYTER_GATEWAY_TOKEN_RENEWER_CLASS env var)
- gateway_token_renewer_class_default_value = 'jupyter_server.gateway.gateway_client.NoOpTokenRenewer'#
- gateway_token_renewer_class_env = 'JUPYTER_GATEWAY_TOKEN_RENEWER_CLASS'#
- headers#
Additional HTTP headers to pass on the request. This value will be converted to a dict. (JUPYTER_GATEWAY_HEADERS env var)
- headers_default_value = '{}'#
- headers_env = 'JUPYTER_GATEWAY_HEADERS'#
- http_pwd#
The password for HTTP authentication. (JUPYTER_GATEWAY_HTTP_PWD env var)
- http_pwd_env = 'JUPYTER_GATEWAY_HTTP_PWD'#
- http_user#
The username for HTTP authentication. (JUPYTER_GATEWAY_HTTP_USER env var)
- http_user_env = 'JUPYTER_GATEWAY_HTTP_USER'#
- init_connection_args()#
Initialize arguments used on every request. Since these are primarily static values, we’ll perform this operation once.
- kernels_endpoint#
The gateway API endpoint for accessing kernel resources (JUPYTER_GATEWAY_KERNELS_ENDPOINT env var)
- kernels_endpoint_default_value = '/api/kernels'#
- kernels_endpoint_env = 'JUPYTER_GATEWAY_KERNELS_ENDPOINT'#
- kernelspecs_endpoint#
The gateway API endpoint for accessing kernelspecs (JUPYTER_GATEWAY_KERNELSPECS_ENDPOINT env var)
- kernelspecs_endpoint_default_value = '/api/kernelspecs'#
- kernelspecs_endpoint_env = 'JUPYTER_GATEWAY_KERNELSPECS_ENDPOINT'#
- kernelspecs_resource_endpoint#
The gateway endpoint for accessing kernelspecs resources (JUPYTER_GATEWAY_KERNELSPECS_RESOURCE_ENDPOINT env var)
- kernelspecs_resource_endpoint_default_value = '/kernelspecs'#
- kernelspecs_resource_endpoint_env = 'JUPYTER_GATEWAY_KERNELSPECS_RESOURCE_ENDPOINT'#
- launch_timeout_pad#
Timeout pad to be ensured between KERNEL_LAUNCH_TIMEOUT and request_timeout such that request_timeout >= KERNEL_LAUNCH_TIMEOUT + launch_timeout_pad. (JUPYTER_GATEWAY_LAUNCH_TIMEOUT_PAD env var)
- launch_timeout_pad_default_value = 2.0#
- launch_timeout_pad_env = 'JUPYTER_GATEWAY_LAUNCH_TIMEOUT_PAD'#
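The invariant request_timeout >= KERNEL_LAUNCH_TIMEOUT + launch_timeout_pad can be sketched as a one-line adjustment (a sketch of that bookkeeping, not the client's actual code):

```python
def effective_request_timeout(request_timeout,
                              kernel_launch_timeout=40.0,
                              pad=2.0):
    """Raise request_timeout to satisfy
    request_timeout >= KERNEL_LAUNCH_TIMEOUT + launch_timeout_pad.

    Defaults mirror KERNEL_LAUNCH_TIMEOUT (40) and
    launch_timeout_pad_default_value (2.0).
    """
    return max(request_timeout, kernel_launch_timeout + pad)

print(effective_request_timeout(30.0))  # 42.0
```

Note that 40.0 + 2.0 = 42.0 matches request_timeout_default_value.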
- load_connection_args(**kwargs)#
Merges the static args relative to the connection, with the given keyword arguments. If static args have yet to be initialized, we’ll do that here.
- request_timeout#
The time allowed for HTTP request completion. (JUPYTER_GATEWAY_REQUEST_TIMEOUT env var)
- request_timeout_default_value = 42.0#
- request_timeout_env = 'JUPYTER_GATEWAY_REQUEST_TIMEOUT'#
- url#
The url of the Kernel or Enterprise Gateway server where kernel specifications are defined and kernel management takes place. If defined, this Notebook server acts as a proxy for all kernel management and kernel specification retrieval. (JUPYTER_GATEWAY_URL env var)
- url_env = 'JUPYTER_GATEWAY_URL'#
- validate_cert#
For HTTPS requests, determines if server’s certificate should be validated or not. (JUPYTER_GATEWAY_VALIDATE_CERT env var)
- validate_cert_default_value = True#
- validate_cert_env = 'JUPYTER_GATEWAY_VALIDATE_CERT'#
- ws_url#
The websocket url of the Kernel or Enterprise Gateway server. If not provided, this value will correspond to the value of the Gateway url with ‘ws’ in place of ‘http’. (JUPYTER_GATEWAY_WS_URL env var)
- ws_url_env = 'JUPYTER_GATEWAY_WS_URL'#
- class jupyter_server.gateway.gateway_client.GatewayTokenRenewerBase(**kwargs)#
Bases:
ABC
,LoggingConfigurable
Abstract base class for refreshing tokens used between this server and a Gateway server. Implementations requiring additional configuration can extend their class with appropriate configuration values or convey those values via appropriate environment variables relative to the implementation.
- class jupyter_server.gateway.gateway_client.GatewayTokenRenewerMeta(name, bases, classdict, **kwds)#
Bases:
ABCMeta
,MetaHasTraits
The metaclass necessary for proper ABC behavior in a Configurable.
- class jupyter_server.gateway.gateway_client.NoOpTokenRenewer(**kwargs)#
Bases:
GatewayTokenRenewerBase
NoOpTokenRenewer is the default value of the GatewayClient trait gateway_token_renewer and merely returns the provided token.
- class jupyter_server.gateway.gateway_client.RetryableHTTPClient#
Bases:
object
Inspired by urllib3.util.Retry (https://urllib3.readthedocs.io/en/stable/reference/urllib3.util.html), this class is initialized with desired retry characteristics and uses a recursive method fetch() against an instance of AsyncHTTPClient, which tracks the current retry count across applicable request retries.
- MAX_RETRIES_CAP = 10#
- MAX_RETRIES_DEFAULT = 2#
- async fetch(endpoint, **kwargs)#
Retryable AsyncHTTPClient.fetch() method. When the request fails, this method will recurse up to max_retries times if the condition deserves a retry.
- Return type:
- async jupyter_server.gateway.gateway_client.gateway_request(endpoint, **kwargs)#
Make an async request to a kernel gateway endpoint and return a response.
- Return type:
Gateway API handlers.
- class jupyter_server.gateway.handlers.GatewayResourceHandler(application, request, **kwargs)#
Bases:
APIHandler
Retrieves resources for specific kernelspec definitions from kernel/enterprise gateway.
- get(kernel_name, path, include_body=True)#
Get a gateway resource by name and path.
- class jupyter_server.gateway.handlers.GatewayWebSocketClient(**kwargs: Any)#
Bases:
LoggingConfigurable
Proxy web socket connection to a kernel/enterprise gateway.
- on_close()#
Web socket closed event.
- on_message(message)#
Send message to gateway server.
- on_open(kernel_id, message_callback, **kwargs)#
Web socket connection open against gateway server.
- class jupyter_server.gateway.handlers.WebSocketChannelsHandler(application, request, **kwargs)#
Bases:
WebSocketHandler
,JupyterHandler
Gateway web socket channels handler.
- authenticate()#
Run before finishing the GET request
Extend this method to add logic that should fire before the websocket finishes completing.
- check_origin(origin=None)#
Check origin for the socket.
- gateway = None#
- async get(kernel_id, *args, **kwargs)#
Get the socket.
- get_compression_options()#
Get the compression options for the socket.
- initialize()#
Initialize the socket.
- kernel_id = None#
- on_close()#
Handle a closing socket.
- on_message(message)#
Forward message to gateway web socket handler.
- open(kernel_id, *args, **kwargs)#
Handle web socket connection open to notebook server and delegate to gateway web socket handler
- ping_callback = None#
- send_ping()#
Send a ping to the socket.
- session = None#
- set_default_headers()#
Undo the set_default_headers in JupyterHandler, which doesn't make sense for websockets.
- write_message(message, binary=False)#
Send message back to notebook client. This is called via callback from self.gateway._read_messages.
Kernel gateway managers.
- class jupyter_server.gateway.managers.ChannelQueue(channel_name, channel_socket, log)#
Bases:
Queue
A queue for a named channel.
- static serialize_datetime(dt)#
Serialize a datetime object.
- class jupyter_server.gateway.managers.GatewayKernelClient(**kwargs: Any)#
Bases:
AsyncKernelClient
Communicates with a single kernel indirectly via a websocket to a gateway server.
There are five channels associated with each kernel:
shell: for request/reply calls to the kernel.
iopub: for the kernel to publish results to frontends.
hb: for monitoring the kernel’s heartbeat.
stdin: for frontends to reply to raw_input calls in the kernel.
control: for kernel management calls to the kernel.
The messages that can be sent on these channels are exposed as methods of the client (KernelClient.execute, complete, history, etc.). These methods only send the message; they don't wait for a reply. To get results, use e.g. get_shell_msg() to fetch messages from the shell channel.
- allow_stdin: bool = False#
- property control_channel#
Get the control channel object for this kernel.
- property hb_channel#
Get the hb channel object for this kernel.
- property iopub_channel#
Get the iopub channel object for this kernel.
- property shell_channel#
Get the shell channel object for this kernel.
- async start_channels(shell=True, iopub=True, stdin=True, hb=True, control=True)#
Starts the channels for this kernel.
For this class, we establish a websocket connection to the destination and set up the channel-based queues on which applicable messages will be posted.
- property stdin_channel#
Get the stdin channel object for this kernel.
- stop_channels()#
Stops all the running channels for this kernel.
For this class, we close the websocket connection and destroy the channel-based queues.
- class jupyter_server.gateway.managers.GatewayKernelManager(**kwargs: Any)#
Bases:
ServerKernelManager
Manages a single kernel remotely via a Gateway Server.
- cleanup_resources(restart=False)#
Clean up resources when the kernel is shut down
- client(**kwargs)#
Create a client configured to connect to our kernel
- client_class: DottedObjectName#
A string holding a valid dotted object name in Python, such as A.b3._c
- client_factory: Type#
A trait whose value must be a subclass of a specified class.
- property has_kernel#
Has a kernel been started that we are managing.
- async interrupt_kernel()#
Interrupts the kernel via an HTTP request.
- async is_alive()#
Is the kernel process still running?
- kernel = None#
- kernel_id: Optional[str] = None#
- async refresh_model(model=None)#
Refresh the kernel model.
- Parameters:
model (dict) – The model from which to refresh the kernel. If None, the kernel model is fetched from the Gateway server.
- async restart_kernel(**kw)#
Restarts a kernel via HTTP.
- async shutdown_kernel(now=False, restart=False)#
Attempts to stop the kernel process cleanly via HTTP.
- async start_kernel(**kwargs)#
Starts a kernel via HTTP in an asynchronous manner.
- Parameters:
**kwargs (optional) – keyword arguments that are passed down to build the kernel_cmd and launching the kernel (e.g. Popen kwargs).
- class jupyter_server.gateway.managers.GatewayKernelSpecManager(**kwargs: Any)#
Bases:
KernelSpecManager
A gateway kernel spec manager.
- async get_all_specs()#
Get all of the kernel specs for the gateway.
- async get_kernel_spec(kernel_name, **kwargs)#
Get kernel spec for kernel_name.
- Parameters:
kernel_name (str) – The name of the kernel.
- async get_kernel_spec_resource(kernel_name, path)#
Get kernel spec for kernel_name.
- async list_kernel_specs()#
Get a list of kernel specs.
- class jupyter_server.gateway.managers.GatewayMappingKernelManager(**kwargs: Any)#
Bases:
AsyncMappingKernelManager
Kernel manager that supports remote kernels hosted by Jupyter Kernel or Enterprise Gateway.
- async cull_kernels()#
Override cull_kernels, so we can be sure their state is current.
- async interrupt_kernel(kernel_id, **kwargs)#
Interrupt a kernel by its kernel uuid.
- Parameters:
kernel_id (uuid) – The id of the kernel to interrupt.
- async kernel_model(kernel_id)#
Return a dictionary of kernel information described in the JSON standard model.
- Parameters:
kernel_id (uuid) – The uuid of the kernel.
- async list_kernels(**kwargs)#
Get a list of running kernels from the Gateway server.
We’ll use this opportunity to refresh the models in each of the kernels we’re managing.
- remove_kernel(kernel_id)#
Complete override since we want to be more tolerant of missing keys
- async restart_kernel(kernel_id, now=False, **kwargs)#
Restart a kernel by its kernel uuid.
- Parameters:
kernel_id (uuid) – The id of the kernel to restart.
- async shutdown_all(now=False)#
Shutdown all kernels.
- async shutdown_kernel(kernel_id, now=False, restart=False)#
Shutdown a kernel by its kernel uuid.
- async start_kernel(*, kernel_id=None, path=None, **kwargs)#
Start a kernel for a session and return its kernel_id.
- Parameters:
kernel_id (uuid) – The uuid to associate the new kernel with. If this is not None, this kernel will be persistent whenever it is requested.
path (API path) – The API path (unicode, ‘/’ delimited) for the cwd. Will be transformed to an OS path relative to root_dir.
- class jupyter_server.gateway.managers.GatewaySessionManager(**kwargs: Any)#
Bases:
SessionManager
A gateway session manager.
- async kernel_culled(kernel_id)#
Checks if the kernel is still considered alive and returns true if it’s not found.
- Return type:
- kernel_manager#
A trait whose value must be an instance of a specified class.
The value can also be an instance of a subclass of the specified class.
Subclasses can declare default classes by overriding the klass attribute
Module contents#
jupyter_server.i18n package#
Module contents#
Server functions for loading translations
- jupyter_server.i18n.cached_load(language, domain='nbjs')#
Load translations for one language, using in-memory cache if available
- jupyter_server.i18n.combine_translations(accept_language, domain='nbjs')#
Combine translations for multiple accepted languages.
Returns data re-packaged in Jed 1.x format.
- jupyter_server.i18n.load(language, domain='nbjs')#
Load translations from an nbjs.json file
- jupyter_server.i18n.parse_accept_lang_header(accept_lang)#
Parses the ‘Accept-Language’ HTTP header.
Returns a list of language codes in ascending order of preference (with the most preferred language last).
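The parsing behavior documented here (ascending preference order, most preferred last) can be sketched with stdlib tools. This is a minimal sketch under the usual q-value semantics, not the function's actual implementation:

```python
import re

def parse_accept_lang(header):
    """Sketch of Accept-Language parsing: split on commas, read the
    optional ;q= weight (default 1.0), and return language codes in
    ascending order of preference (most preferred last)."""
    langs = []
    for part in header.split(","):
        part = part.strip()
        if not part:
            continue
        m = re.match(r"([A-Za-z*-]+)(?:\s*;\s*q=([0-9.]+))?", part)
        if not m:
            continue
        code, q = m.group(1), float(m.group(2) or 1.0)
        langs.append((q, code))
    return [code for q, code in sorted(langs, key=lambda t: t[0])]

print(parse_accept_lang("fr;q=0.8, en, de;q=0.5"))  # ['de', 'fr', 'en']
```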
jupyter_server.kernelspecs package#
Submodules#
Kernelspecs API Handlers.
- class jupyter_server.kernelspecs.handlers.KernelSpecResourceHandler(application, request, **kwargs)#
Bases:
StaticFileHandler
,JupyterHandler
A Kernelspec resource handler.
- SUPPORTED_METHODS = ('GET', 'HEAD')#
- auth_resource = 'kernelspecs'#
- get(kernel_name, path, include_body=True)#
Get a kernelspec resource.
- head(kernel_name, path)#
Get the head info for a kernel resource.
- initialize()#
Initialize a kernelspec resource handler.
Module contents#
jupyter_server.nbconvert package#
Submodules#
Tornado handlers for nbconvert.
- class jupyter_server.nbconvert.handlers.NbconvertFileHandler(application, request, **kwargs)#
Bases:
JupyterHandler
An nbconvert file handler.
- SUPPORTED_METHODS = ('GET',)#
- auth_resource = 'nbconvert'#
- get(format, path)#
Get a notebook file in a desired format.
- class jupyter_server.nbconvert.handlers.NbconvertPostHandler(application, request, **kwargs)#
Bases:
JupyterHandler
An nbconvert post handler.
- SUPPORTED_METHODS = ('POST',)#
- auth_resource = 'nbconvert'#
- post(format)#
Convert a notebook file to a desired format.
- jupyter_server.nbconvert.handlers.find_resource_files(output_files_dir)#
Find the resource files in a directory.
- jupyter_server.nbconvert.handlers.get_exporter(format, **kwargs)#
get an exporter, raising appropriate errors
- jupyter_server.nbconvert.handlers.respond_zip(handler, name, output, resources)#
Zip up the output and resource files and respond with the zip file.
Returns True if it has served a zip file, False if there are no resource files, in which case we serve the plain output file.
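The zip-or-plain decision described for respond_zip can be sketched with the standard library alone; build_zip below is a hypothetical helper illustrating the idea, not the handler itself:

```python
import io
import zipfile

def build_zip(name, output, resources):
    """Bundle converted output with its resource files into a zip.
    Returns None when there are no resource files, in which case the
    plain output should be served directly."""
    outputs = resources.get("outputs") or {}
    if not outputs:
        return None
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(name, output)               # the converted document
        for filename, data in outputs.items():  # extracted images etc.
            zf.writestr(filename, data)
    return buf.getvalue()

payload = build_zip("report.html", "<html></html>",
                    {"outputs": {"fig.png": b"\x89PNG"}})
```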
Module contents#
jupyter_server.prometheus package#
Submodules#
Log functions for prometheus
- jupyter_server.prometheus.log_functions.prometheus_log_method(handler)#
Tornado log handler for recording RED metrics.
- We record the following metrics:
Rate - the number of requests per second your services are serving.
Errors - the number of failed requests per second.
Duration - the amount of time each request takes, expressed as a time interval.
We use the fully qualified name of the handler as a label, rather than every URL path, to reduce cardinality.
This function should be set as (or called from) the function configured as Tornado's 'log_function' setting, so that it runs at the end of every request and records the metrics we need.
Prometheus metrics exported by Jupyter Server
Read https://prometheus.io/docs/practices/naming/ for naming conventions for metrics & labels.
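The RED bookkeeping above can be illustrated without Prometheus itself. This sketch records the same three signals in a plain dict, keyed by handler class name to keep label cardinality low; all names here (METRICS, FakeHandler, log_request) are hypothetical stand-ins, not jupyter_server's:

```python
import time

METRICS = {}  # handler name -> {"requests": n, "errors": n, "duration": s}

class FakeHandler:
    """Stand-in for a Tornado handler, just enough for the sketch."""
    def __init__(self, status, started):
        self._status, self._started = status, started
    def get_status(self):
        return self._status
    def request_time(self):
        return time.time() - self._started

def log_request(handler):
    """Record rate, errors, and duration, keyed by handler class name."""
    key = type(handler).__name__  # low-cardinality label
    m = METRICS.setdefault(key, {"requests": 0, "errors": 0, "duration": 0.0})
    m["requests"] += 1
    if handler.get_status() >= 500:
        m["errors"] += 1
    m["duration"] += handler.request_time()

log_request(FakeHandler(200, time.time()))
log_request(FakeHandler(500, time.time()))
```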
Module contents#
jupyter_server.services package#
Subpackages#
jupyter_server.services.api package#
Submodules#
Tornado handlers for api specifications.
- class jupyter_server.services.api.handlers.APISpecHandler(application, request, **kwargs)#
Bases:
StaticFileHandler, JupyterHandler
A spec handler for the REST API.
- auth_resource = 'api'#
- get()#
Get the API spec.
- get_content_type()#
Get the content type.
- head()#
- initialize()#
Initialize the API spec handler.
- class jupyter_server.services.api.handlers.APIStatusHandler(application, request, **kwargs)#
Bases:
APIHandler
An API status handler.
- auth_resource = 'api'#
- get()#
Get the API status.
- class jupyter_server.services.api.handlers.IdentityHandler(application, request, **kwargs)#
Bases:
APIHandler
Get the current user’s identity model
- get()#
Get the identity model.
Module contents#
jupyter_server.services.config package#
Submodules#
Tornado handlers for frontend config storage.
- class jupyter_server.services.config.handlers.ConfigHandler(application, request, **kwargs)#
Bases:
APIHandler
A config API handler.
- auth_resource = 'config'#
- get(section_name)#
Get config by section name.
- patch(section_name)#
Update a config section by name.
- put(section_name)#
Set a config section by name.
Manager to read and modify frontend config data in JSON files.
- class jupyter_server.services.config.manager.ConfigManager(**kwargs)#
Bases:
LoggingConfigurable
Config Manager used for storing frontend config
- config_dir_name#
Name of the config directory.
- get(section_name)#
Get the config from all config sections.
- read_config_path#
An instance of a Python list.
- set(section_name, data)#
Set the config only to the user’s config.
- update(section_name, new_data)#
Update the config only to the user’s config.
- write_config_dir#
A trait for unicode strings.
- write_config_manager#
A trait whose value must be an instance of a specified class.
The value can also be an instance of a subclass of the specified class.
Subclasses can declare default classes by overriding the klass attribute
Module contents#
- class jupyter_server.services.config.ConfigManager(**kwargs)#
Bases:
LoggingConfigurable
Config Manager used for storing frontend config
- config_dir_name#
Name of the config directory.
- get(section_name)#
Get the config from all config sections.
- read_config_path#
An instance of a Python list.
- set(section_name, data)#
Set the config only to the user’s config.
- update(section_name, new_data)#
Update the config only to the user’s config.
- write_config_dir#
A trait for unicode strings.
- write_config_manager#
A trait whose value must be an instance of a specified class.
The value can also be an instance of a subclass of the specified class.
Subclasses can declare default classes by overriding the klass attribute
jupyter_server.services.contents package#
Submodules#
Classes for managing Checkpoints.
- class jupyter_server.services.contents.checkpoints.AsyncCheckpoints(**kwargs)#
Bases:
Checkpoints
Base class for managing checkpoints for a ContentsManager asynchronously.
- async create_checkpoint(contents_mgr, path)#
Create a checkpoint.
- async delete_all_checkpoints(path)#
Delete all checkpoints for the given path.
- async delete_checkpoint(checkpoint_id, path)#
delete a checkpoint for a file
- async list_checkpoints(path)#
Return a list of checkpoints for a given file
- async rename_all_checkpoints(old_path, new_path)#
Rename all checkpoints for old_path to new_path.
- async rename_checkpoint(checkpoint_id, old_path, new_path)#
Rename a single checkpoint from old_path to new_path.
- async restore_checkpoint(contents_mgr, checkpoint_id, path)#
Restore a checkpoint
- class jupyter_server.services.contents.checkpoints.AsyncGenericCheckpointsMixin#
Bases:
GenericCheckpointsMixin
Helper for creating Asynchronous Checkpoints subclasses that can be used with any ContentsManager.
- async create_checkpoint(contents_mgr, path)#
- async create_file_checkpoint(content, format, path)#
Create a checkpoint of the current state of a file
Returns a checkpoint model for the new checkpoint.
- async create_notebook_checkpoint(nb, path)#
Create a checkpoint of the current state of a file
Returns a checkpoint model for the new checkpoint.
- async get_file_checkpoint(checkpoint_id, path)#
Get the content of a checkpoint for a non-notebook file.
Returns a dict of the form:
{ 'type': 'file', 'content': <str>, 'format': {'text','base64'}, }
- async get_notebook_checkpoint(checkpoint_id, path)#
Get the content of a checkpoint for a notebook.
Returns a dict of the form:
{ 'type': 'notebook', 'content': <output of nbformat.read>, }
- async restore_checkpoint(contents_mgr, checkpoint_id, path)#
Restore a checkpoint.
- class jupyter_server.services.contents.checkpoints.Checkpoints(**kwargs)#
Bases:
LoggingConfigurable
Base class for managing checkpoints for a ContentsManager.
Subclasses are required to implement:
create_checkpoint(self, contents_mgr, path)
restore_checkpoint(self, contents_mgr, checkpoint_id, path)
rename_checkpoint(self, checkpoint_id, old_path, new_path)
delete_checkpoint(self, checkpoint_id, path)
list_checkpoints(self, path)
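To make that contract concrete, here is a toy in-memory implementation of the five required methods. It is a sketch only: it does not subclass the real Checkpoints class, the checkpoint model dicts are simplified, and contents_mgr is assumed to expose simple get/save methods.

```python
import datetime
import uuid

class InMemoryCheckpoints:
    """Toy checkpoint store: {path: {checkpoint_id: content}}."""
    def __init__(self):
        self.store = {}

    def create_checkpoint(self, contents_mgr, path):
        cp_id = uuid.uuid4().hex
        self.store.setdefault(path, {})[cp_id] = contents_mgr.get(path)
        return {"id": cp_id,
                "last_modified": datetime.datetime.now(datetime.timezone.utc)}

    def restore_checkpoint(self, contents_mgr, checkpoint_id, path):
        contents_mgr.save(self.store[path][checkpoint_id], path)

    def rename_checkpoint(self, checkpoint_id, old_path, new_path):
        content = self.store[old_path].pop(checkpoint_id)
        self.store.setdefault(new_path, {})[checkpoint_id] = content

    def delete_checkpoint(self, checkpoint_id, path):
        del self.store[path][checkpoint_id]

    def list_checkpoints(self, path):
        return [{"id": cp_id} for cp_id in self.store.get(path, {})]
```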
- create_checkpoint(contents_mgr, path)#
Create a checkpoint.
- delete_all_checkpoints(path)#
Delete all checkpoints for the given path.
- delete_checkpoint(checkpoint_id, path)#
delete a checkpoint for a file
- list_checkpoints(path)#
Return a list of checkpoints for a given file
- rename_all_checkpoints(old_path, new_path)#
Rename all checkpoints for old_path to new_path.
- rename_checkpoint(checkpoint_id, old_path, new_path)#
Rename a single checkpoint from old_path to new_path.
- restore_checkpoint(contents_mgr, checkpoint_id, path)#
Restore a checkpoint
- class jupyter_server.services.contents.checkpoints.GenericCheckpointsMixin#
Bases:
object
Helper for creating Checkpoints subclasses that can be used with any ContentsManager.
Provides a ContentsManager-agnostic implementation of create_checkpoint and restore_checkpoint in terms of the following operations:
create_file_checkpoint(self, content, format, path)
create_notebook_checkpoint(self, nb, path)
get_file_checkpoint(self, checkpoint_id, path)
get_notebook_checkpoint(self, checkpoint_id, path)
To create a generic CheckpointManager, add this mixin to a class that implements the above four methods plus the remaining Checkpoints API methods:
delete_checkpoint(self, checkpoint_id, path)
list_checkpoints(self, path)
rename_checkpoint(self, checkpoint_id, old_path, new_path)
- create_checkpoint(contents_mgr, path)#
- create_file_checkpoint(content, format, path)#
Create a checkpoint of the current state of a file
Returns a checkpoint model for the new checkpoint.
- create_notebook_checkpoint(nb, path)#
Create a checkpoint of the current state of a file
Returns a checkpoint model for the new checkpoint.
- get_file_checkpoint(checkpoint_id, path)#
Get the content of a checkpoint for a non-notebook file.
Returns a dict of the form:
{ 'type': 'file', 'content': <str>, 'format': {'text','base64'}, }
- get_notebook_checkpoint(checkpoint_id, path)#
Get the content of a checkpoint for a notebook.
Returns a dict of the form:
{ 'type': 'notebook', 'content': <output of nbformat.read>, }
- restore_checkpoint(contents_mgr, checkpoint_id, path)#
Restore a checkpoint.
File-based Checkpoints implementations.
- class jupyter_server.services.contents.filecheckpoints.AsyncFileCheckpoints(**kwargs)#
Bases:
FileCheckpoints, AsyncFileManagerMixin, AsyncCheckpoints
- async checkpoint_model(checkpoint_id, os_path)#
construct the info dict for a given checkpoint
- async create_checkpoint(contents_mgr, path)#
Create a checkpoint.
- async delete_checkpoint(checkpoint_id, path)#
delete a file’s checkpoint
- async list_checkpoints(path)#
list the checkpoints for a given file
This contents manager currently only supports one checkpoint per file.
- async rename_checkpoint(checkpoint_id, old_path, new_path)#
Rename a checkpoint from old_path to new_path.
- async restore_checkpoint(contents_mgr, checkpoint_id, path)#
Restore a checkpoint.
- class jupyter_server.services.contents.filecheckpoints.AsyncGenericFileCheckpoints(**kwargs)#
Bases:
AsyncGenericCheckpointsMixin, AsyncFileCheckpoints
Asynchronous Local filesystem Checkpoints that works with any conforming ContentsManager.
- async create_file_checkpoint(content, format, path)#
Create a checkpoint from the current content of a file.
- async create_notebook_checkpoint(nb, path)#
Create a checkpoint from the current content of a notebook.
- async get_file_checkpoint(checkpoint_id, path)#
Get a checkpoint for a file.
- async get_notebook_checkpoint(checkpoint_id, path)#
Get a checkpoint for a notebook.
- class jupyter_server.services.contents.filecheckpoints.FileCheckpoints(**kwargs)#
Bases:
FileManagerMixin, Checkpoints
A Checkpoints that caches checkpoints for files in adjacent directories.
Only works with FileContentsManager. Use GenericFileCheckpoints if you want file-based checkpoints with another ContentsManager.
- checkpoint_dir#
The directory name in which to keep file checkpoints
This is a path relative to the file’s own directory.
By default, it is .ipynb_checkpoints
- checkpoint_model(checkpoint_id, os_path)#
construct the info dict for a given checkpoint
- checkpoint_path(checkpoint_id, path)#
find the path to a checkpoint
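The layout implied by checkpoint_dir and checkpoint_path can be sketched as follows; the exact filename scheme used here ('name-&lt;id&gt;.&lt;ext&gt;') is an assumption for illustration, not copied from the implementation:

```python
import os

def checkpoint_path(checkpoint_id, os_path, checkpoint_dir=".ipynb_checkpoints"):
    """Place checkpoints in a sibling directory next to the file itself
    (filename scheme assumed for illustration)."""
    parent, name = os.path.split(os_path)
    base, ext = os.path.splitext(name)
    return os.path.join(parent, checkpoint_dir, f"{base}-{checkpoint_id}{ext}")

print(checkpoint_path("abc123", "work/Untitled.ipynb"))
```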
- create_checkpoint(contents_mgr, path)#
Create a checkpoint.
- delete_checkpoint(checkpoint_id, path)#
delete a file’s checkpoint
- list_checkpoints(path)#
list the checkpoints for a given file
This contents manager currently only supports one checkpoint per file.
- no_such_checkpoint(path, checkpoint_id)#
- rename_checkpoint(checkpoint_id, old_path, new_path)#
Rename a checkpoint from old_path to new_path.
- restore_checkpoint(contents_mgr, checkpoint_id, path)#
Restore a checkpoint.
- root_dir#
A trait for unicode strings.
- class jupyter_server.services.contents.filecheckpoints.GenericFileCheckpoints(**kwargs)#
Bases:
GenericCheckpointsMixin, FileCheckpoints
Local filesystem Checkpoints that works with any conforming ContentsManager.
- create_file_checkpoint(content, format, path)#
Create a checkpoint from the current content of a file.
- create_notebook_checkpoint(nb, path)#
Create a checkpoint from the current content of a notebook.
- get_file_checkpoint(checkpoint_id, path)#
Get a checkpoint for a file.
- get_notebook_checkpoint(checkpoint_id, path)#
Get a checkpoint for a notebook.
Utilities for file-based Contents/Checkpoints managers.
- class jupyter_server.services.contents.fileio.AsyncFileManagerMixin(**kwargs)#
Bases:
FileManagerMixin
Mixin for ContentsAPI classes that interact with the filesystem asynchronously.
- class jupyter_server.services.contents.fileio.FileManagerMixin(**kwargs)#
Bases:
LoggingConfigurable, Configurable
Mixin for ContentsAPI classes that interact with the filesystem.
Provides facilities for reading, writing, and copying files.
Shared by FileContentsManager and FileCheckpoints.
Note
Classes using this mixin must provide the following attributes:
- root_dir : unicode
A directory against which API-style paths are to be resolved.
log : logging.Logger
- atomic_writing(os_path, *args, **kwargs)#
Wrapper around atomic_writing that turns permission errors into 403. Depending on the 'use_atomic_writing' flag, it performs an actual atomic write or simply writes the file in place (whether or not an old file exists).
- hash_algorithm#
Hash algorithm to use for file content, as supported by hashlib.
- open(os_path, *args, **kwargs)#
wrapper around io.open that turns permission errors into 403
- perm_to_403(os_path='')#
context manager for turning permission errors into 403.
- use_atomic_writing#
By default, notebooks are saved to a temporary file on disk and, if successfully written, that file replaces the old one. This procedure, 'atomic_writing', causes bugs on file systems without operation-order enforcement (like some networked filesystems). If set to False, the new notebook is written directly over the old one, which can fail (e.g. on a full filesystem or an exceeded quota).
- async jupyter_server.services.contents.fileio.async_copy2_safe(src, dst, log=None)#
copy src to dst asynchronously
like shutil.copy2, but log errors in copystat instead of raising
- async jupyter_server.services.contents.fileio.async_replace_file(src, dst)#
replace dst with src asynchronously
- jupyter_server.services.contents.fileio.atomic_writing(path, text=True, encoding='utf-8', log=None, **kwargs)#
Context manager to write to a file only if the entire write is successful.
This works by copying the previous file contents to a temporary file in the same directory, and renaming that file back to the target if the context exits with an error. If the context is successful, the new data is synced to disk and the temporary file is removed.
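The backup-then-restore scheme described above can be sketched as a context manager. This is an illustration of the idea, not jupyter_server's code; the '.~backup' suffix is an arbitrary stand-in for the real intermediate name:

```python
import os
import shutil
from contextlib import contextmanager

@contextmanager
def atomic_writing(path, encoding="utf-8"):
    """Copy the existing file aside, write in place, and restore the
    copy if the write fails (sketch of the backup-then-restore scheme)."""
    tmp = path + ".~backup"            # intermediate name (illustrative)
    had_original = os.path.isfile(path)
    if had_original:
        shutil.copy2(path, tmp)        # preserve the previous contents
    try:
        with open(path, "w", encoding=encoding) as f:
            yield f
            f.flush()
            os.fsync(f.fileno())       # sync the new data to disk
    except BaseException:
        if had_original:
            os.replace(tmp, path)      # roll back to the saved copy
        raise
    else:
        if had_original:
            os.remove(tmp)             # success: drop the backup
```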
- jupyter_server.services.contents.fileio.copy2_safe(src, dst, log=None)#
copy src to dst
like shutil.copy2, but log errors in copystat instead of raising
- jupyter_server.services.contents.fileio.path_to_intermediate(path)#
Name of the intermediate file used in atomic writes.
The .~ prefix will make Dropbox ignore the temporary file.
- jupyter_server.services.contents.fileio.path_to_invalid(path)#
Name of invalid file after a failed atomic write and subsequent read.
- jupyter_server.services.contents.fileio.replace_file(src, dst)#
replace dst with src
A contents manager that uses the local file system for storage.
- class jupyter_server.services.contents.filemanager.AsyncFileContentsManager(**kwargs)#
Bases:
FileContentsManager, AsyncFileManagerMixin, AsyncContentsManager
An async file contents manager.
- async check_folder_size(path)#
Limit the size of folders being copied to no more than the max_copy_folder_size_mb trait, to prevent a timeout error.
- Return type:
None
- async copy(from_path, to_path=None)#
Copy an existing file or directory and return its new model. If to_path is not specified, it will be the parent directory of from_path. If copying a file and to_path is a directory, the file/directory name will increment as from_path-Copy#.ext. For multi-part extensions, the Copy# part is placed before the first dot for all extensions except ipynb; for easier manual searching of notebooks, it is placed before the last dot. from_path must be a full path to a file or directory.
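The naming rule in the copy docstring (Copy# before the first dot, except for notebooks, where it goes before the last dot) can be sketched as:

```python
def copy_name(filename, n=1):
    """Insert '-Copy<n>' before the first dot, except for notebooks,
    where it goes before the last dot so '.ipynb' stays searchable."""
    if filename.endswith(".ipynb"):
        base, _, ext = filename.rpartition(".")
        return f"{base}-Copy{n}.{ext}"
    base, dot, rest = filename.partition(".")
    return f"{base}-Copy{n}{dot}{rest}"

print(copy_name("data.tar.gz"))   # data-Copy1.tar.gz
print(copy_name("nb.v2.ipynb"))   # nb.v2-Copy1.ipynb
```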
- async delete_file(path)#
Delete file at path.
- async dir_exists(path)#
Does a directory exist at the given path
- async file_exists(path)#
Does a file exist at the given path
- async get(path, content=True, type=None, format=None, require_hash=False)#
Takes a path for an entity and returns its model
- Parameters:
path (str) – the API path that describes the relative path for the target
content (bool) – Whether to include the contents in the reply
type (str, optional) – The requested type - ‘file’, ‘notebook’, or ‘directory’. Will raise HTTPError 400 if the content doesn’t match.
format (str, optional) – The requested format for file contents. ‘text’ or ‘base64’. Ignored if this returns a notebook or directory model.
require_hash (bool, optional) – Whether to include the hash of the file contents.
- Returns:
model – the contents model. If content=True, returns the contents of the file or directory as well.
- Return type:
dict
- async get_kernel_path(path, model=None)#
Return the initial API path of a kernel associated with a given notebook
- is_hidden(path)#
Is path a hidden directory or file?
- async rename_file(old_path, new_path)#
Rename a file.
- async save(model, path='')#
Save the file model and return the model with no content.
- class jupyter_server.services.contents.filemanager.FileContentsManager(**kwargs)#
Bases:
FileManagerMixin, ContentsManager
A file contents manager.
- always_delete_dir#
If True, deleting a non-empty directory will always be allowed. WARNING: this may result in files being permanently removed; e.g. on Windows, if the data size is too big for the trash/recycle bin, the directory will be permanently deleted. If False (default), the non-empty directory will be sent to the trash only if safe; and if delete_to_trash is True, the directory won't be deleted.
- check_folder_size(path)#
Limit the size of folders being copied to no more than the max_copy_folder_size_mb trait, to prevent a timeout error.
- copy(from_path, to_path=None)#
Copy an existing file or directory and return its new model. If to_path is not specified, it will be the parent directory of from_path. If copying a file and to_path is a directory, the file/directory name will increment as from_path-Copy#.ext. For multi-part extensions, the Copy# part is placed before the first dot for all extensions except ipynb; for easier manual searching of notebooks, it is placed before the last dot. from_path must be a full path to a file or directory.
- delete_file(path)#
Delete file at path.
- delete_to_trash#
If True (default), deleting files will send them to the platform’s trash/recycle bin, where they can be recovered. If False, deleting files really deletes them.
- dir_exists(path)#
Does the API-style path refer to an extant directory?
API-style wrapper for os.path.isdir
- exists(path)#
Returns True if the path exists, else returns False.
API-style wrapper for os.path.exists
- file_exists(path)#
Returns True if the file exists, else returns False.
API-style wrapper for os.path.isfile
- get(path, content=True, type=None, format=None, require_hash=False)#
Takes a path for an entity and returns its model
- Parameters:
path (str) – the API path that describes the relative path for the target
content (bool) – Whether to include the contents in the reply
type (str, optional) – The requested type - ‘file’, ‘notebook’, or ‘directory’. Will raise HTTPError 400 if the content doesn’t match.
format (str, optional) – The requested format for file contents. ‘text’ or ‘base64’. Ignored if this returns a notebook or directory model.
require_hash (bool, optional) – Whether to include the hash of the file contents.
- Returns:
model – the contents model. If content=True, returns the contents of the file or directory as well.
- Return type:
dict
- get_kernel_path(path, model=None)#
Return the initial API path of a kernel associated with a given notebook
- info_string()#
Get the information string for the manager.
- is_hidden(path)#
Does the API-style path correspond to a hidden directory or file?
- is_writable(path)#
Does the API style path correspond to a writable directory or file?
- max_copy_folder_size_mb#
The max folder size that can be copied
- rename_file(old_path, new_path)#
Rename a file.
- root_dir#
A trait for unicode strings.
- save(model, path='')#
Save the file model and return the model with no content.
Tornado handlers for the contents web service.
Preliminary documentation at ipython/ipython
- class jupyter_server.services.contents.handlers.CheckpointsHandler(application, request, **kwargs)#
Bases:
ContentsAPIHandler
A checkpoints API handler.
- get(path='')#
get lists checkpoints for a file
- post(path='')#
post creates a new checkpoint
- class jupyter_server.services.contents.handlers.ContentsAPIHandler(application, request, **kwargs)#
Bases:
APIHandler
A contents API handler.
- auth_resource = 'contents'#
- class jupyter_server.services.contents.handlers.ContentsHandler(application, request, **kwargs)#
Bases:
ContentsAPIHandler
A contents handler.
- delete(path='')#
delete a file in the given path
- get(path='')#
Return a model for a file or directory.
A directory model contains a list of models (without content) of the files and directories it contains.
- location_url(path)#
Return the full URL location of a file.
- Parameters:
path (unicode) – The API path of the file, such as “foo/bar.txt”.
- patch(path='')#
PATCH renames a file or directory without re-uploading content.
- post(path='')#
Create a new file in the specified path.
POST creates new files. The server always decides on the name.
- POST /api/contents/path
New untitled, empty file or directory.
- POST /api/contents/path with body {"copy_from": "/path/to/OtherNotebook.ipynb"}
New copy of OtherNotebook in path.
- put(path='')#
Saves the file in the location specified by name and path.
PUT is very similar to POST, but the requester specifies the name, whereas with POST, the server picks the name.
- PUT /api/contents/path/Name.ipynb
Save the notebook at path/Name.ipynb. The notebook structure is specified in the 'content' key of the JSON request body. If content is not specified, create a new empty notebook.
- class jupyter_server.services.contents.handlers.ModifyCheckpointsHandler(application, request, **kwargs)#
Bases:
ContentsAPIHandler
A checkpoints modification handler.
- delete(path, checkpoint_id)#
delete clears a checkpoint for a given file
- post(path, checkpoint_id)#
post restores a file from a checkpoint
- class jupyter_server.services.contents.handlers.NotebooksRedirectHandler(application, request, **kwargs)#
Bases:
JupyterHandler
Redirect /api/notebooks to /api/contents
- SUPPORTED_METHODS = ('GET', 'PUT', 'PATCH', 'POST', 'DELETE')#
- delete(path)#
Handle a notebooks redirect.
- get(path)#
Handle a notebooks redirect.
- patch(path)#
Handle a notebooks redirect.
- post(path)#
Handle a notebooks redirect.
- put(path)#
Handle a notebooks redirect.
- class jupyter_server.services.contents.handlers.TrustNotebooksHandler(application, request, **kwargs)#
Bases:
JupyterHandler
Handles trust/signing of notebooks
- post(path='')#
Trust a notebook by path.
- jupyter_server.services.contents.handlers.validate_model(model, expect_content=False, expect_hash=False)#
Validate a model returned by a ContentsManager method.
If expect_content is True, then we expect non-null entries for ‘content’ and ‘format’.
If expect_hash is True, then we expect non-null entries for ‘hash’ and ‘hash_algorithm’.
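A simplified version of that validation can be sketched as follows; the required key set here approximates the Contents API model and may not match the implementation exactly:

```python
def validate_model(model, expect_content=False, expect_hash=False):
    """Check that a contents model has the expected keys, and that
    content/format (and hash/hash_algorithm) are non-null when required."""
    required = {"name", "path", "type", "writable", "created",
                "last_modified", "mimetype", "content", "format"}
    missing = required - model.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    if expect_content and (model["content"] is None or model["format"] is None):
        raise ValueError("content and format must be non-null")
    if expect_hash and (model.get("hash") is None
                        or model.get("hash_algorithm") is None):
        raise ValueError("hash and hash_algorithm must be non-null")
```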
- class jupyter_server.services.contents.largefilemanager.AsyncLargeFileManager(**kwargs)#
Bases:
AsyncFileContentsManager
Handle large file upload asynchronously
- async save(model, path='')#
Save the file model and return the model with no content.
- class jupyter_server.services.contents.largefilemanager.LargeFileManager(**kwargs)#
Bases:
FileContentsManager
Handle large file upload.
- save(model, path='')#
Save the file model and return the model with no content.
A base class for contents managers.
- class jupyter_server.services.contents.manager.AsyncContentsManager(**kwargs)#
Bases:
ContentsManager
Base class for serving files and directories asynchronously.
- checkpoints#
A trait whose value must be an instance of a specified class.
The value can also be an instance of a subclass of the specified class.
Subclasses can declare default classes by overriding the klass attribute
- checkpoints_class#
A trait whose value must be a subclass of a specified class.
- checkpoints_kwargs#
An instance of a Python dict.
One or more traits can be passed to the constructor to validate the keys and/or values of the dict. If you need more detailed validation, you may use a custom validator method.
Changed in version 5.0: Added key_trait for validating dict keys.
Changed in version 5.0: Deprecated ambiguous trait, traits args in favor of value_trait, per_key_traits.
- async copy(from_path, to_path=None)#
Copy an existing file and return its new model.
If to_path is not specified, it will be the parent directory of from_path. If to_path is a directory, the filename will increment as from_path-Copy#.ext. For multi-part extensions, the Copy# part is placed before the first dot for all extensions except ipynb; for easier manual searching of notebooks, it is placed before the last dot. from_path must be a full path to a file.
- async create_checkpoint(path)#
Create a checkpoint.
- async delete(path)#
Delete a file/directory and any associated checkpoints.
- async delete_checkpoint(checkpoint_id, path)#
Delete a checkpoint for a path by id.
- async delete_file(path)#
Delete the file or directory at path.
- async dir_exists(path)#
Does a directory exist at the given path?
Like os.path.isdir
Override this method in subclasses.
- async exists(path)#
Does a file or directory exist at the given path?
Like os.path.exists
- async file_exists(path='')#
Does a file exist at the given path?
Like os.path.isfile
Override this method in subclasses.
- async get(path, content=True, type=None, format=None, require_hash=False)#
Get a file or directory model.
- Parameters:
require_hash (bool) – Whether the file hash must be returned or not.
Changed in version 2.11.
- async increment_filename(filename, path='', insert='')#
Increment a filename until it is unique.
- Parameters:
filename (unicode) – The name of a file, including extension
path (unicode) – The API path of the target’s directory
insert (unicode) – The characters to insert after the base filename
- Returns:
name – A filename that is unique, based on the input filename.
- Return type:
unicode
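The incrementing behaviour can be sketched against an in-memory set of existing names (increment_filename here is a hypothetical re-implementation that checks a set rather than a real directory):

```python
def increment_filename(filename, existing, insert=""):
    """Append <insert><n> before the extension, counting n upward
    until the name no longer collides with anything in `existing`."""
    base, dot, ext = filename.rpartition(".")
    if not dot:                       # no extension at all
        base, ext = filename, ""
    suffix = f".{ext}" if dot else ""
    if filename not in existing:
        return filename
    n = 1
    while f"{base}{insert}{n}{suffix}" in existing:
        n += 1
    return f"{base}{insert}{n}{suffix}"

print(increment_filename("untitled.txt", {"untitled.txt", "untitled1.txt"}))
```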
- is_hidden(path)#
Is path a hidden directory or file?
- async list_checkpoints(path)#
List the checkpoints for a path.
- async new(model=None, path='')#
Create a new file or directory and return its model with no content.
To create a new untitled entity in a directory, use new_untitled.
- async new_untitled(path='', type='', ext='')#
Create a new untitled file or directory in path
path must be a directory
File extension can be specified.
Use new to create files with a fully specified path (including filename).
- async rename(old_path, new_path)#
Rename a file and any checkpoints associated with that file.
- async rename_file(old_path, new_path)#
Rename a file or directory.
- async restore_checkpoint(checkpoint_id, path)#
Restore a checkpoint.
- async save(model, path)#
Save a file or directory model to path.
Should return the saved model with no content. Save implementations should call self.run_pre_save_hook(model=model, path=path) prior to writing any data.
- async trust_notebook(path)#
Explicitly trust a notebook
- Parameters:
path (str) – The path of a notebook
- async update(model, path)#
Update the file’s path
For use in PATCH requests, to enable renaming a file without re-uploading its contents. Only used for renaming at the moment.
- class jupyter_server.services.contents.manager.ContentsManager(**kwargs)#
Bases:
LoggingConfigurable
Base class for serving files and directories.
This serves any text or binary file, as well as directories, with special handling for JSON notebook documents.
Most APIs take a path argument, which is always an API-style unicode path, and always refers to a directory.
unicode, not url-escaped
‘/’-separated
leading and trailing ‘/’ will be stripped
if unspecified, path defaults to ‘’, indicating the root path.
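Those conventions amount to a simple normalization step, sketched here:

```python
def normalize_api_path(path):
    """Collapse an API-style path: '/'-separated, with leading, trailing,
    and duplicate slashes stripped, and '' denoting the root directory."""
    return "/".join(part for part in path.split("/") if part)

print(normalize_api_path("/foo//bar.txt/"))  # foo/bar.txt
```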
- allow_hidden#
Allow access to hidden files
- check_and_sign(nb, path='')#
Check for trusted cells, and sign the notebook.
Called as a part of saving notebooks.
- checkpoints#
A trait whose value must be an instance of a specified class.
The value can also be an instance of a subclass of the specified class.
Subclasses can declare default classes by overriding the klass attribute
- checkpoints_class#
A trait whose value must be a subclass of a specified class.
- checkpoints_kwargs#
An instance of a Python dict.
One or more traits can be passed to the constructor to validate the keys and/or values of the dict. If you need more detailed validation, you may use a custom validator method.
Changed in version 5.0: Added key_trait for validating dict keys.
Changed in version 5.0: Deprecated ambiguous trait, traits args in favor of value_trait, per_key_traits.
- copy(from_path, to_path=None)#
Copy an existing file and return its new model.
If to_path is not specified, it will be the parent directory of from_path. If to_path is a directory, the filename will increment as from_path-Copy#.ext. For multi-part extensions, the Copy# part is placed before the first dot for all extensions except ipynb; for easier manual searching of notebooks, it is placed before the last dot. from_path must be a full path to a file.
- create_checkpoint(path)#
Create a checkpoint.
- delete(path)#
Delete a file/directory and any associated checkpoints.
- delete_checkpoint(checkpoint_id, path)#
- delete_file(path)#
Delete the file or directory at path.
- dir_exists(path)#
Does a directory exist at the given path?
Like os.path.isdir
Override this method in subclasses.
- emit(data)#
Emit event using the core event schema from Jupyter Server’s Contents Manager.
- event_logger#
A trait whose value must be an instance of a specified class.
The value can also be an instance of a subclass of the specified class.
Subclasses can declare default classes by overriding the klass attribute
- event_schema_id = 'https://events.jupyter.org/jupyter_server/contents_service/v1'#
- exists(path)#
Does a file or directory exist at the given path?
Like os.path.exists
- file_exists(path='')#
Does a file exist at the given path?
Like os.path.isfile
Override this method in subclasses.
- files_handler_class#
handler class to use when serving raw file requests.
Default is a fallback that talks to the ContentsManager API, which may be inefficient, especially for large files.
Local files-based ContentsManagers can use a StaticFileHandler subclass, which will be much more efficient.
Access to these files should be Authenticated.
- files_handler_params#
Extra parameters to pass to files_handler_class.
For example, StaticFileHandlers generally expect a path argument specifying the root directory from which to serve files.
- get(path, content=True, type=None, format=None, require_hash=False)#
Get a file or directory model.
- Parameters:
require_hash (bool) – Whether the file hash must be returned or not.
Changed in version 2.11.
- get_extra_handlers()#
Return additional handlers
Default: self.files_handler_class on /files/.*
- get_kernel_path(path, model=None)#
Return the API path for the kernel
KernelManagers can turn this value into a filesystem path, or ignore it altogether.
The default value here will start kernels in the directory of the notebook server. FileContentsManager overrides this to use the directory containing the notebook.
- hide_globs#
Glob patterns to hide in file and directory listings.
- increment_filename(filename, path='', insert='')#
Increment a filename until it is unique.
- Parameters:
filename (unicode) – The name of a file, including extension
path (unicode) – The API path of the target’s directory
insert (unicode) – The characters to insert after the base filename
- Returns:
name – A filename that is unique, based on the input filename.
- Return type:
unicode
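The naming behavior described above can be sketched as follows; this is a hypothetical stand-in where `existing` replaces the directory listing that the real contents manager consults:

```python
import itertools

def increment_filename(filename, existing, insert=""):
    # Sketch of the documented behavior: keep the extension, place the
    # insert characters plus a counter after the base name, and stop at
    # the first candidate not already taken.
    base, dot, ext = filename.partition(".")
    for i in itertools.count():
        suffix = f"{insert}{i}" if i else ""
        candidate = f"{base}{suffix}{dot}{ext}"
        if candidate not in existing:
            return candidate
```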
- info_string()#
The information string for the manager.
- is_hidden(path)#
Is path a hidden directory or file?
- list_checkpoints(path)#
- log_info()#
Log the information string for the manager.
- mark_trusted_cells(nb, path='')#
Mark cells as trusted if the notebook signature matches.
Called as a part of loading notebooks.
- new(model=None, path='')#
Create a new file or directory and return its model with no content.
To create a new untitled entity in a directory, use new_untitled.
- new_untitled(path='', type='', ext='')#
Create a new untitled file or directory in path
path must be a directory
File extension can be specified.
Use new to create files with a fully specified path (including filename).
- notary#
A trait whose value must be an instance of a specified class.
The value can also be an instance of a subclass of the specified class.
Subclasses can declare default classes by overriding the klass attribute
- post_save_hook#
Python callable or importstring thereof
to be called on the path of a file just saved.
This can be used to process the file on disk, such as converting the notebook to a script or HTML via nbconvert.
It will be called as (all arguments passed by keyword):
hook(os_path=os_path, model=model, contents_manager=instance)
os_path: the filesystem path to the file just written
model: the model representing the file
contents_manager: this ContentsManager instance
- pre_save_hook#
Python callable or importstring thereof
To be called on a contents model prior to save.
This can be used to process the structure, such as removing notebook outputs or other side effects that should not be saved.
It will be called as (all arguments passed by keyword):
hook(path=path, model=model, contents_manager=self)
model: the model to be saved. Includes file contents. Modifying this dict will affect the file that is stored.
path: the API path of the save destination
contents_manager: this ContentsManager instance
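A pre-save hook following the keyword signature shown above might look like this sketch; `strip_outputs` is a hypothetical name, and such a callable would be assigned to the pre_save_hook trait or passed to register_pre_save_hook:

```python
def strip_outputs(model=None, path=None, contents_manager=None, **kwargs):
    # Hypothetical pre-save hook: clear code-cell outputs so they are
    # never written to disk. The keyword signature mirrors the call
    # described above: hook(path=path, model=model, contents_manager=self).
    if model.get("type") != "notebook":
        return
    for cell in model["content"].get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
```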
- preferred_dir#
Preferred starting directory to use for notebooks. This is an API path (/ separated, relative to root dir).
- register_post_save_hook(hook)#
Register a post save hook.
- register_pre_save_hook(hook)#
Register a pre save hook.
- rename(old_path, new_path)#
Rename a file and any checkpoints associated with that file.
- rename_file(old_path, new_path)#
Rename a file or directory.
- restore_checkpoint(checkpoint_id, path)#
Restore a checkpoint.
- root_dir#
A trait for unicode strings.
- run_post_save_hook(model, os_path)#
Run the post-save hook if defined, and log errors
- run_post_save_hooks(model, os_path)#
Run the post-save hooks if any, and log errors
- run_pre_save_hook(model, path, **kwargs)#
Run the pre-save hook if defined, and log errors
- run_pre_save_hooks(model, path, **kwargs)#
Run the pre-save hooks if any, and log errors
- save(model, path)#
Save a file or directory model to path.
Should return the saved model with no content. Save implementations should call self.run_pre_save_hook(model=model, path=path) prior to writing any data.
- should_list(name)#
Should this file/directory name be displayed in a listing?
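The interaction between hide_globs and this listing check can be sketched with fnmatch; the glob values below are illustrative, not the trait's defaults:

```python
import fnmatch

hide_globs = ["__pycache__", "*.pyc", ".DS_Store"]  # illustrative values

def should_list(name):
    # Sketch of the listing filter: hide any name matching one of the
    # hide_globs patterns documented above.
    return not any(fnmatch.fnmatch(name, glob) for glob in hide_globs)
```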
- untitled_directory#
The base name used when creating untitled directories.
- untitled_file#
The base name used when creating untitled files.
- untitled_notebook#
The base name used when creating untitled notebooks.
- update(model, path)#
Update the file’s path
For use in PATCH requests, to enable renaming a file without re-uploading its contents. Only used for renaming at the moment.
- validate_notebook_model(model, validation_error=None)#
Add failed-validation message to model
Module contents#
jupyter_server.services.events package#
Submodules#
A Websocket Handler for emitting Jupyter server events.
New in version 2.0.
- class jupyter_server.services.events.handlers.EventHandler(application, request, **kwargs)#
Bases:
APIHandler
REST api handler for events
- auth_resource = 'events'#
- post()#
Emit an event.
- class jupyter_server.services.events.handlers.SubscribeWebsocket(application, request, **kwargs)#
Bases:
JupyterHandler, WebSocketHandler
Websocket handler for subscribing to events
- auth_resource = 'events'#
- get(*args, **kwargs)#
Get an event socket.
- on_close()#
Handle a socket close.
- open()#
Routes events that are emitted by Jupyter Server’s EventBus to a WebSocket client in the browser.
- async pre_get()#
Handles authorization when attempting to subscribe to events emitted by Jupyter Server’s eventbus.
- jupyter_server.services.events.handlers.get_timestamp(data)#
Parses timestamp from the JSON request body
Module contents#
jupyter_server.services.kernels package#
Subpackages#
jupyter_server.services.kernels.connection package#
Submodules#
- class jupyter_server.services.kernels.connection.abc.KernelWebsocketConnectionABC#
Bases:
ABC
This class defines a minimal interface that should be used to bridge the connection between Jupyter Server’s websocket API and a kernel’s ZMQ socket interface.
- abstract async connect()#
Connect the kernel websocket to the kernel ZMQ connections
- abstract async disconnect()#
Disconnect the kernel websocket from the kernel ZMQ connections
- abstract handle_incoming_message(incoming_msg)#
Broker the incoming websocket message to the appropriate ZMQ channel.
- Return type:
- abstract handle_outgoing_message(stream, outgoing_msg)#
Broker outgoing ZMQ messages to the kernel websocket.
- Return type:
Kernel connection helpers.
- class jupyter_server.services.kernels.connection.base.BaseKernelWebsocketConnection(**kwargs)#
Bases:
LoggingConfigurable
A configurable base class for connecting Kernel WebSockets to ZMQ sockets.
- async connect()#
Handle a connect.
- async disconnect()#
Handle a disconnect.
- property kernel_id#
The kernel id.
- kernel_info_timeout#
A float trait.
- property kernel_manager#
The kernel manager.
- kernel_ws_protocol#
Preferred kernel message protocol over websocket to use (default: None). If an empty string is passed, select the legacy protocol. If None, the selected protocol will depend on what the front-end supports (usually the most recent protocol supported by the back-end and the front-end).
- property multi_kernel_manager#
The multi kernel manager.
- session#
A trait whose value must be an instance of a specified class.
The value can also be an instance of a subclass of the specified class.
Subclasses can declare default classes by overriding the klass attribute
- property session_id#
The session id.
- websocket_handler#
A trait whose value must be an instance of a specified class.
The value can also be an instance of a subclass of the specified class.
Subclasses can declare default classes by overriding the klass attribute
- jupyter_server.services.kernels.connection.base.deserialize_binary_message(bmsg)#
Deserialize a message from a binary blob.
Header:
4 bytes: number of msg parts (nbufs) as a 32-bit int
4 * nbufs bytes: offset for each buffer as a 32-bit int
Offsets are from the start of the buffer, including the header.
- Return type:
message dictionary
- jupyter_server.services.kernels.connection.base.deserialize_msg_from_ws_v1(ws_msg)#
Deserialize a message using the v1 protocol.
- jupyter_server.services.kernels.connection.base.serialize_binary_message(msg)#
Serialize a message as a binary blob.
Header:
4 bytes: number of msg parts (nbufs) as a 32-bit int
4 * nbufs bytes: offset for each buffer as a 32-bit int
Offsets are from the start of the buffer, including the header.
- Return type:
The message serialized to bytes.
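The header layout described for these serialization helpers can be sketched with the struct module; this is an illustrative packer only, and the network byte order used here is an assumption:

```python
import struct

def pack_binary_message(buffers):
    # Sketch of the documented layout: 4 bytes holding the number of
    # buffers (nbufs) as a 32-bit int, then one 32-bit offset per
    # buffer, measured from the start of the blob (header included).
    nbufs = len(buffers)
    offsets, pos = [], 4 + 4 * nbufs
    for buf in buffers:
        offsets.append(pos)
        pos += len(buf)
    header = struct.pack(f"!I{nbufs}I", nbufs, *offsets)
    return header + b"".join(buffers)
```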
- jupyter_server.services.kernels.connection.base.serialize_msg_to_ws_v1(msg_or_list, channel, pack=None)#
Serialize a message using the v1 protocol.
An implementation of a kernel connection.
- class jupyter_server.services.kernels.connection.channels.ZMQChannelsWebsocketConnection(**kwargs)#
Bases:
BaseKernelWebsocketConnection
A Jupyter Server Websocket Connection
- channels#
An instance of a Python dict.
One or more traits can be passed to the constructor to validate the keys and/or values of the dict. If you need more detailed validation, you may use a custom validator method.
Changed in version 5.0: Added key_trait for validating dict keys.
Changed in version 5.0: Deprecated ambiguous trait, traits args in favor of value_trait, per_key_traits.
- close()#
Close the connection.
- async classmethod close_all()#
Tornado does not provide a way to close open sockets, so add one.
- connect()#
Handle a connection.
- create_stream()#
Create a stream.
- disconnect()#
Handle a disconnect.
- get_part(field, value, msg_list)#
Get a part of a message.
- handle_incoming_message(incoming_msg)#
Handle incoming messages from Websocket to ZMQ Sockets.
- Return type:
- handle_outgoing_message(stream, outgoing_msg)#
Handle the outgoing messages from ZMQ sockets to Websocket.
- Return type:
- iopub_data_rate_limit#
(bytes/sec) Maximum rate at which stream output can be sent on iopub before they are limited.
- iopub_msg_rate_limit#
(msgs/sec) Maximum rate at which messages can be sent on iopub before they are limited.
- kernel_info_channel#
A trait which allows any value.
- limit_rate#
Whether to limit the rate of IOPub messages (default: True). If True, use iopub_msg_rate_limit, iopub_data_rate_limit and/or rate_limit_window to tune the rate.
- nudge()#
Nudge the zmq connections with kernel_info_requests. Returns a Future that will resolve when we have received a shell or control reply and at least one iopub message, ensuring that zmq subscriptions are established, sockets are fully connected, and the kernel is responsive. Keeps retrying kernel_info_request until both are received.
- on_kernel_restarted()#
Handle a kernel restart.
- on_restart_failed()#
Handle a kernel restart failure.
- async prepare()#
Prepare a kernel connection.
- rate_limit_window#
(sec) Time window used to check the message and data rate limits.
- request_kernel_info()#
send a request for kernel_info
- session_key#
A trait for unicode strings.
- property subprotocol#
The sub protocol.
- websocket_handler#
A trait whose value must be an instance of a specified class.
The value can also be an instance of a subclass of the specified class.
Subclasses can declare default classes by overriding the klass attribute
- property write_message#
Alias to the websocket handler’s write_message method.
- write_stderr(error_message, parent_header)#
Write a message to stderr.
Module contents#
Submodules#
Tornado handlers for kernels.
Preliminary documentation at ipython/ipython
- class jupyter_server.services.kernels.handlers.KernelActionHandler(application, request, **kwargs)#
Bases:
KernelsAPIHandler
A kernel action API handler.
- post(kernel_id, action)#
Interrupt or restart a kernel.
- class jupyter_server.services.kernels.handlers.KernelHandler(application, request, **kwargs)#
Bases:
KernelsAPIHandler
A kernel API handler.
- delete(kernel_id)#
Remove a kernel.
- get(kernel_id)#
Get a kernel model.
- class jupyter_server.services.kernels.handlers.KernelsAPIHandler(application, request, **kwargs)#
Bases:
APIHandler
A kernels API handler.
- auth_resource = 'kernels'#
- class jupyter_server.services.kernels.handlers.MainKernelHandler(application, request, **kwargs)#
Bases:
KernelsAPIHandler
The root kernel handler.
- get()#
Get the list of running kernels.
- post()#
Start a kernel.
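As a client-side illustration of the two endpoints above (GET lists running kernels, POST starts one), here is a request-building sketch using only the standard library; the base URL and token handling are assumptions of this sketch:

```python
import json
from urllib.request import Request

def kernels_request(base_url, token, kernel_name=None):
    # GET /api/kernels lists running kernels; POST with a JSON body
    # such as {"name": "python3"} starts a new one.
    headers = {"Authorization": f"token {token}"}
    if kernel_name is None:
        return Request(base_url + "/api/kernels", headers=headers, method="GET")
    headers["Content-Type"] = "application/json"
    body = json.dumps({"name": kernel_name}).encode()
    return Request(base_url + "/api/kernels", data=body, headers=headers,
                   method="POST")
```

Pass the returned request to urllib.request.urlopen against a running server to get the JSON kernel models back.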
A MultiKernelManager for use in the Jupyter server:
- raises HTTPErrors
- creates REST API models
- class jupyter_server.services.kernels.kernelmanager.AsyncMappingKernelManager(**kwargs: Any)#
Bases:
MappingKernelManager, AsyncMultiKernelManager
An asynchronous mapping kernel manager.
- class jupyter_server.services.kernels.kernelmanager.MappingKernelManager(**kwargs: Any)#
Bases:
MultiKernelManager
A KernelManager that handles file mapping, HTTP error handling, and kernel message filtering.
- allow_tracebacks#
Whether to send tracebacks to clients on exceptions.
- allowed_message_types#
White list of allowed kernel message types. When the list is empty, all message types are allowed.
- buffer_offline_messages#
Whether messages from kernels whose frontends have disconnected should be buffered in-memory.
When True (default), messages are buffered and replayed on reconnect, avoiding lost messages due to interrupted connectivity.
Disable if long-running kernels will produce too much output while no frontends are connected.
- cull_busy#
Whether to consider culling kernels which are busy. Only effective if cull_idle_timeout > 0.
- cull_connected#
Whether to consider culling kernels which have one or more connections. Only effective if cull_idle_timeout > 0.
- cull_idle_timeout#
Timeout (in seconds) after which a kernel is considered idle and ready to be culled. Values of 0 or lower disable culling. Very short timeouts may result in kernels being culled for users with poor network connections.
- cull_interval#
The interval (in seconds) on which to check for idle kernels exceeding the cull timeout value.
- cull_interval_default = 300#
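The culling traits above are typically combined in a jupyter_server_config.py file; the values below are illustrative, not recommendations:

```python
# jupyter_server_config.py (illustrative values)
c.MappingKernelManager.cull_idle_timeout = 3600  # cull kernels idle for 1 hour
c.MappingKernelManager.cull_interval = 300       # check every 5 minutes
c.MappingKernelManager.cull_busy = False         # never cull busy kernels
c.MappingKernelManager.cull_connected = False    # skip kernels with open connections
```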
- async cull_kernel_if_idle(kernel_id)#
Cull a kernel if it is idle.
- async cull_kernels()#
Handle culling kernels.
- cwd_for_path(path, **kwargs)#
Turn API path into absolute OS path.
- get_buffer(kernel_id, session_key)#
Get the buffer for a given kernel
- initialize_culler()#
Start idle culler if ‘cull_idle_timeout’ is greater than zero.
Regardless of that value, set flag that we’ve been here.
- kernel_argv#
An instance of a Python list.
- kernel_info_timeout#
Timeout for giving up on a kernel (in seconds).
On starting and restarting kernels, we check whether the kernel is running and responsive by sending kernel_info_requests. This sets the timeout in seconds for how long the kernel can take before being presumed dead. This affects the MappingKernelManager (which handles kernel restarts) and the ZMQChannelsHandler (which handles the startup).
- kernel_model(kernel_id)#
Return a JSON-safe dict representing a kernel
For use in representing kernels in the JSON APIs.
- last_kernel_activity#
The last activity on any kernel, including shutting down a kernel
- list_kernels()#
Returns a list of kernel_id’s of kernels running.
- notify_connect(kernel_id)#
Notice a new connection to a kernel
- notify_disconnect(kernel_id)#
Notice a disconnection from a kernel
- ports_changed(kernel_id)#
Used by ZMQChannelsHandler to determine how to coordinate nudge and replays.
Ports are captured when starting a kernel (via MappingKernelManager). Ports are considered changed (following restarts) if the referenced KernelManager is using a set of ports different from those captured at startup. If changes are detected, the captured set is updated and a value of True is returned.
NOTE: Use is exclusive to ZMQChannelsHandler because this object is a singleton instance while ZMQChannelsHandler instances are per WebSocket connection that can vary per kernel lifetime.
- async restart_kernel(kernel_id, now=False)#
Restart a kernel by kernel_id
- root_dir#
A trait for unicode strings.
- async shutdown_kernel(kernel_id, now=False, restart=False)#
Shutdown a kernel by kernel_id
- start_buffering(kernel_id, session_key, channels)#
Start buffering messages for a kernel
- Parameters:
kernel_id (str) – The id of the kernel whose messages should be buffered.
session_key (str) – The session_key, if any, that should get the buffer. If the session_key matches the current buffered session_key, the buffer will be returned.
channels (dict({'channel': ZMQStream})) – The zmq channels whose messages should be buffered.
- async start_kernel(*, kernel_id=None, path=None, **kwargs)#
Start a kernel for a session and return its kernel_id.
- Parameters:
kernel_id (uuid (str)) – The uuid to associate the new kernel with. If this is not None, this kernel will be persistent whenever it is requested.
path (API path) – The API path (unicode, ‘/’ delimited) for the cwd. Will be transformed to an OS path relative to root_dir.
kernel_name (str) – The name identifying which kernel spec to launch. This is ignored if an existing kernel is returned, but it may be checked in the future.
- Return type:
str
- start_watching_activity(kernel_id)#
Start watching IOPub messages on a kernel for activity.
update last_activity on every message
record execution_state from status messages
- stop_buffering(kernel_id)#
Stop buffering kernel messages
- Parameters:
kernel_id (str) – The id of the kernel to stop buffering.
- stop_watching_activity(kernel_id)#
Stop watching IOPub messages on a kernel for activity.
- traceback_replacement_message#
Message to print when allow_tracebacks is False, and an exception occurs
- class jupyter_server.services.kernels.kernelmanager.ServerKernelManager(*args, **kwargs)#
Bases:
AsyncIOLoopKernelManager
A server-specific kernel manager.
- emit(schema_id, data)#
Emit an event from the kernel manager.
- event_logger#
A trait whose value must be an instance of a specified class.
The value can also be an instance of a subclass of the specified class.
Subclasses can declare default classes by overriding the klass attribute
- execution_state#
The current execution state of the kernel
- extra_event_schema_paths: List[str]#
A list of pathlib.Path objects pointing at event schemas to register with the kernel manager’s eventlogger.
- async interrupt_kernel(*args, **kwargs)#
Interrupts the kernel by sending it a signal.
Unlike signal_kernel, this operation is well supported on all platforms.
- last_activity#
The last activity on the kernel
- reason#
The reason for the last failure against the kernel
- async restart_kernel(*args, **kwargs)#
Restarts a kernel with the arguments that were used to launch it.
- Parameters:
now (bool, optional) –
If True, the kernel is forcefully restarted immediately, without having a chance to do any cleanup action. Otherwise the kernel is given 1s to clean up before a forceful restart is issued.
In all cases the kernel is restarted, the only difference is whether it is given a chance to perform a clean shutdown or not.
newports (bool, optional) – If the old kernel was launched with random ports, this flag decides whether the same ports and connection file will be used again. If False, the same ports and connection file are used. This is the default. If True, new random port numbers are chosen and a new connection file is written. It is still possible that the newly chosen random port numbers happen to be the same as the old ones.
**kw (optional) – Any options specified here will overwrite those used to launch the kernel.
- async shutdown_kernel(*args, **kwargs)#
Attempts to stop the kernel process cleanly.
This attempts to shutdown the kernels cleanly by:
Sending it a shutdown message over the control channel.
If that fails, the kernel is shutdown forcibly by sending it a signal.
- async start_kernel(*args, **kwargs)#
Starts a kernel on this host in a separate process.
If random ports (port=0) are being used, this method must be called before the channels are created.
- Parameters:
**kw (optional) – keyword arguments that are passed down to build the kernel_cmd and launching the kernel (e.g. Popen kwargs).
- jupyter_server.services.kernels.kernelmanager.emit_kernel_action_event(success_msg='')#
Decorate kernel action methods to begin emitting jupyter kernel action events.
- Parameters:
success_msg (str) – A formattable string that’s passed to the message field of the emitted event when the action succeeds. You can include the kernel_id, kernel_name, or action in the message using a formatted string argument, e.g. “{kernel_id} succeeded to {action}.”
error_msg (str) – A formattable string that’s passed to the message field of the emitted event when the action fails. You can include the kernel_id, kernel_name, or action in the message using a formatted string argument, e.g. “{kernel_id} failed to {action}.”
- Return type:
Tornado handlers for WebSocket <-> ZMQ sockets.
- class jupyter_server.services.kernels.websocket.KernelWebsocketHandler(application, request, **kwargs)#
Bases:
WebSocketMixin, WebSocketHandler, JupyterHandler
The kernels websocket should connect
- auth_resource = 'kernels'#
- get(kernel_id)#
Handle a get request for a kernel.
- get_compression_options()#
Get the socket connection options.
- property kernel_websocket_connection_class#
The kernel websocket connection class.
- on_close()#
Handle a socket closure.
- on_message(ws_message)#
Get a kernel message from the websocket and turn it into a ZMQ message.
- async open(kernel_id)#
Open a kernel websocket.
- async pre_get()#
Handle a pre_get.
- select_subprotocol(subprotocols)#
Select the sub protocol for the socket.
- set_default_headers()#
Undo the set_default_headers in JupyterHandler, which doesn’t make sense for websockets.
Module contents#
jupyter_server.services.kernelspecs package#
Submodules#
Tornado handlers for kernel specifications.
Preliminary documentation at ipython/ipython
- class jupyter_server.services.kernelspecs.handlers.KernelSpecHandler(application, request, **kwargs)#
Bases:
KernelSpecsAPIHandler
A handler for an individual kernel spec.
- get(kernel_name)#
Get a kernel spec model.
- class jupyter_server.services.kernelspecs.handlers.KernelSpecsAPIHandler(application, request, **kwargs)#
Bases:
APIHandler
A kernel spec API handler.
- auth_resource = 'kernelspecs'#
- class jupyter_server.services.kernelspecs.handlers.MainKernelSpecHandler(application, request, **kwargs)#
Bases:
KernelSpecsAPIHandler
The root kernel spec handler.
- get()#
Get the list of kernel specs.
- jupyter_server.services.kernelspecs.handlers.is_kernelspec_model(spec_dict)#
Returns True if spec_dict is already in proper form. This will occur when using a gateway.
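The "proper form" check described above can be sketched as a simple key test; this is an illustrative approximation, not the library's exact implementation:

```python
def is_kernelspec_model(spec_dict):
    # Sketch of the documented check: a kernelspec REST model carries
    # name/spec/resources keys, unlike a raw kernelspec dict.
    return (
        isinstance(spec_dict, dict)
        and {"name", "spec", "resources"} <= spec_dict.keys()
    )
```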
- jupyter_server.services.kernelspecs.handlers.kernelspec_model(handler, name, spec_dict, resource_dir)#
Load a KernelSpec by name and return the REST API model
Module contents#
jupyter_server.services.nbconvert package#
Submodules#
API Handlers for nbconvert.
- class jupyter_server.services.nbconvert.handlers.NbconvertRootHandler(application, request, **kwargs)#
Bases:
APIHandler
The nbconvert root API handler.
- auth_resource = 'nbconvert'#
- get()#
Get the list of nbconvert exporters.
- initialize(**kwargs)#
Initialize an nbconvert root handler.
Module contents#
jupyter_server.services.security package#
Submodules#
Tornado handlers for security logging.
- class jupyter_server.services.security.handlers.CSPReportHandler(application, request, **kwargs)#
Bases:
APIHandler
Accepts a content security policy violation report
- auth_resource = 'csp'#
- check_xsrf_cookie()#
Don’t check XSRF for CSP reports.
- post()#
Log a content security policy violation report
- skip_check_origin()#
Don’t check origin when reporting origin-check violations!
Module contents#
jupyter_server.services.sessions package#
Submodules#
Tornado handlers for the sessions web service.
Preliminary documentation at ipython/ipython
- class jupyter_server.services.sessions.handlers.SessionHandler(application, request, **kwargs)#
Bases:
SessionsAPIHandler
A handler for a single session.
- delete(session_id)#
Delete the session with given session_id.
- get(session_id)#
Get the JSON model for a single session.
- patch(session_id)#
Patch updates sessions:
path updates session to track renamed paths
kernel.name starts a new kernel with a given kernelspec
- class jupyter_server.services.sessions.handlers.SessionRootHandler(application, request, **kwargs)#
Bases:
SessionsAPIHandler
A Session Root API handler.
- get()#
Get a list of running sessions.
- post()#
Create a new session.
- class jupyter_server.services.sessions.handlers.SessionsAPIHandler(application, request, **kwargs)#
Bases:
APIHandler
A Sessions API handler.
- auth_resource = 'sessions'#
A base class session manager.
- class jupyter_server.services.sessions.sessionmanager.KernelSessionRecord(session_id=None, kernel_id=None)#
Bases:
object
A record object for tracking a Jupyter Server Kernel Session.
Two records that share a session_id must also share a kernel_id, while kernels can have multiple sessions (and thereby session_ids) associated with them.
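The invariant above can be illustrated with a hypothetical stand-in record; `Record` and `conflicts_with` are names invented for this sketch:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Record:
    # Hypothetical stand-in for KernelSessionRecord: records sharing a
    # session_id must agree on kernel_id, while one kernel_id may be
    # shared by many session_ids.
    session_id: Optional[str] = None
    kernel_id: Optional[str] = None

    def conflicts_with(self, other):
        # True when two records claim the same session but different kernels.
        return (
            self.session_id is not None
            and self.session_id == other.session_id
            and None not in (self.kernel_id, other.kernel_id)
            and self.kernel_id != other.kernel_id
        )
```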
- exception jupyter_server.services.sessions.sessionmanager.KernelSessionRecordConflict#
Bases:
Exception
Exception class to use when two KernelSessionRecords cannot merge because of conflicting data.
- class jupyter_server.services.sessions.sessionmanager.KernelSessionRecordList(*records)#
Bases:
object
An object for storing and managing a list of KernelSessionRecords.
When adding a record to the list, the KernelSessionRecordList first checks if the record already exists in the list. If it does, the record will be updated with the new information; otherwise, it will be appended.
- get(record)#
Return a full KernelSessionRecord from a session_id, kernel_id, or incomplete KernelSessionRecord.
- Return type:
- remove(record)#
Remove a record if it’s found in the list. If it’s not found, do nothing.
- Return type:
- class jupyter_server.services.sessions.sessionmanager.SessionManager(**kwargs: Any)#
Bases:
LoggingConfigurable
A session manager.
- close()#
Close the sqlite connection
- property connection#
Start a database connection
- contents_manager#
A trait whose value must be an instance of a class in a specified list of classes. The value can also be an instance of a subclass of the specified classes. Subclasses can declare default classes by overriding the klass attribute
- async create_session(path=None, name=None, type=None, kernel_name=None, kernel_id=None)#
Creates a session and returns its model
- property cursor#
Start a cursor and create a database called ‘session’
- database_filepath#
The filesystem path to the SQLite database file (e.g. /path/to/session_database.db). By default, the session database is stored in-memory (i.e. the :memory: setting from sqlite3) and does not persist when the current Jupyter Server shuts down.
- async delete_session(session_id)#
Deletes the row in the session database with given session_id
- get_kernel_env(path, name=None)#
Return the environment variables that need to be set in the kernel
- async get_session(**kwargs)#
Returns the model for a particular session.
Takes a keyword argument and searches for the value in the session database, then returns the rest of the session’s info.
- async kernel_culled(kernel_id)#
Checks if the kernel is still considered alive and returns true if it’s not found.
- Return type:
- kernel_manager#
A trait whose value must be an instance of a specified class.
The value can also be an instance of a subclass of the specified class.
Subclasses can declare default classes by overriding the klass attribute
- async list_sessions()#
Returns a list of dictionaries containing all the information from the session database
- async row_to_model(row, tolerate_culled=False)#
Takes sqlite database session row and turns it into a dictionary
- async save_session(session_id, path=None, name=None, type=None, kernel_id=None)#
Saves the items for the session with the given session_id
Given a session_id (and any other of the arguments), this method creates a row in the sqlite session database that holds the information for a session.
- Parameters:
- Returns:
model – a dictionary of the session model
- Return type:
- async session_exists(path)#
Check to see if the session of a given name exists
- async start_kernel_for_session(session_id, path, name, type, kernel_name)#
Start a new kernel for a given session.
- Parameters:
session_id (str) – uuid for the session; this method must be given a session_id
path (str) – the path for the given session; this sometimes appears to be a session id.
name (str) – Usually the model name, like the filename associated with current kernel.
type (str) – the type of the session
kernel_name (str) – the name of the kernel specification to use. The default kernel name will be used if not provided.
- Return type:
- async update_session(session_id, **kwargs)#
Updates the values in the session database.
Changes the values of the session with the given session_id with the values from the keyword arguments.
Module contents#
Submodules#
HTTP handler to shut down the Jupyter server.
- class jupyter_server.services.shutdown.ShutdownHandler(application, request, **kwargs)#
Bases:
JupyterHandler
A shutdown API handler.
- auth_resource = 'server'#
- post()#
Shut down the server.
Module contents#
jupyter_server.view package#
Submodules#
Tornado handlers for viewing HTML files.
- class jupyter_server.view.handlers.ViewHandler(application, request, **kwargs)#
Bases:
JupyterHandler
Render HTML files within an iframe.
- auth_resource = 'contents'#
- get(path)#
Get a view on a given path.
Module contents#
Tornado handlers for viewing HTML files.
Submodules#
Manager to read and modify config data in JSON files.
- class jupyter_server.config_manager.BaseJSONConfigManager(**kwargs)#
Bases:
LoggingConfigurable
General JSON config manager
Deals with persisting/storing config in a json file with optional default values in a {section_name}.d directory.
- config_dir#
A trait for unicode strings.
- directory(section_name)#
Returns the directory name for the section name: {config_dir}/{section_name}.d
- Return type:
- file_name(section_name)#
Returns the json filename for the section_name: {config_dir}/{section_name}.json
- Return type:
- get(section_name, include_root=True)#
Retrieve the config data for the specified section.
Returns the data as a dictionary, or an empty dictionary if the file doesn’t exist.
When include_root is False, it will not read the root .json file, effectively returning the default values.
- read_directory#
A boolean (True, False) trait.
- jupyter_server.config_manager.recursive_update(target, new)#
Recursively update one dictionary using another.
None values will delete their keys.
- Return type:
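The semantics described above can be sketched as follows; the empty-dict cleanup after a recursive merge is an assumption of this sketch rather than documented behavior:

```python
def recursive_update(target, new):
    # Sketch of the documented semantics: nested dicts merge
    # recursively, a None value deletes its key, and any other value
    # overwrites the existing one.
    for key, value in new.items():
        if isinstance(value, dict):
            node = target.get(key)
            if not isinstance(node, dict):
                node = target[key] = {}
            recursive_update(node, value)
            if not node:
                del target[key]  # assumption: drop dicts emptied by the merge
        elif value is None:
            target.pop(key, None)
        else:
            target[key] = value
```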
- jupyter_server.config_manager.remove_defaults(data, defaults)#
Recursively remove items from dict that are already in defaults
- Return type:
Log utilities.
- jupyter_server.log.log_request(handler)#
log a bit more information about each request than tornado’s default
move static file get success to debug-level (reduces noise)
get proxied IP instead of proxy IP
log referer for redirect and failed requests
log user-agent for failed requests
A tornado based Jupyter server.
- class jupyter_server.serverapp.JupyterPasswordApp(**kwargs)#
Bases:
JupyterApp
Set a password for the Jupyter server.
Setting a password secures the Jupyter server and removes the need for token-based authentication.
- description: str = 'Set a password for the Jupyter server.\n\n Setting a password secures the Jupyter server\n and removes the need for token-based authentication.\n '#
- start()#
Start the password app.
- class jupyter_server.serverapp.JupyterServerListApp(**kwargs)#
Bases:
JupyterApp
An application to list running Jupyter servers.
- description: str = 'List currently running Jupyter servers.'#
- flags: StrDict = {'json': ({'JupyterServerListApp': {'json': True}}, 'Produce machine-readable JSON object on each line of output.'), 'jsonlist': ({'JupyterServerListApp': {'jsonlist': True}}, 'Produce machine-readable JSON list output.')}#
- json#
If True, each line of output will be a JSON object with the details from the server info file. For a JSON list output, see the JupyterServerListApp.jsonlist configuration value
- jsonlist#
If True, the output will be a JSON list of objects, one per active Jupyter server, each with the details from the relevant server info file.
- start()#
Start the server list application.
- version: str = '2.14.0'#
- class jupyter_server.serverapp.JupyterServerStopApp(**kwargs)#
Bases:
JupyterApp
An application to stop a Jupyter server.
- description: str = 'Stop currently running Jupyter server for a given port'#
- parse_command_line(argv=None)#
Parse command line options.
- port#
Port of the server to be killed. Default 8888
- shutdown_server(server)#
Shut down a server.
- sock#
UNIX socket of the server to be killed.
- start()#
Start the server stop app.
- version: str = '2.14.0'#
- class jupyter_server.serverapp.ServerApp(**kwargs)#
Bases:
JupyterApp
The Jupyter Server application class.
- aliases: StrDict#
An instance of a Python dict.
One or more traits can be passed to the constructor to validate the keys and/or values of the dict. If you need more detailed validation, you may use a custom validator method.
Changed in version 5.0: Added key_trait for validating dict keys.
Changed in version 5.0: Deprecated ambiguous trait, traits args in favor of value_trait, per_key_traits.
- allow_credentials#
Set the Access-Control-Allow-Credentials: true header.
- allow_external_kernels#
Whether or not to allow external kernels, whose connection files are placed in external_connection_dir.
- allow_origin#
Set the Access-Control-Allow-Origin header
Use ‘*’ to allow any origin to access your server.
Takes precedence over allow_origin_pat.
- allow_origin_pat#
Use a regular expression for the Access-Control-Allow-Origin header
Requests from an origin matching the expression will get replies with:
Access-Control-Allow-Origin: origin
where origin is the origin of the request.
Ignored if allow_origin is set.
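As a sketch, a jupyter_server_config.py entry using this pattern might look like the following (the domain is illustrative, not a recommendation):

```python
# jupyter_server_config.py -- illustrative CORS settings
c = get_config()  # noqa: F821 (provided by the traitlets config loader)

# Reply with Access-Control-Allow-Origin for any origin matching the pattern:
c.ServerApp.allow_origin_pat = r"^https://.*\.example\.org$"

# Leave allow_origin unset here: it would take precedence over the pattern.
```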
- allow_password_change#
DEPRECATED in 2.0. Use PasswordIdentityProvider.allow_password_change
- allow_remote_access#
Allow requests where the Host header doesn’t point to a local server
By default, requests get a 403 forbidden response if the ‘Host’ header shows that the browser thinks it’s on a non-local domain. Setting this option to True disables this check.
This check protects against 'DNS rebinding' attacks, where a remote web server serves you a page and then changes its DNS to send later requests to a local IP, bypassing same-origin checks.
Local IP addresses (such as 127.0.0.1 and ::1) are allowed as local, along with hostnames configured in local_hostnames.
- allow_root#
Whether to allow the user to run the server as root.
- allow_unauthenticated_access#
Allow unauthenticated access to endpoints without authentication rule.
When set to True (default in jupyter-server 2.0, subject to change in the future), any request to an endpoint without an authentication rule (either @tornado.web.authenticated or @allow_unauthenticated) will be permitted, regardless of whether the user has logged in or not.
When set to False, logging in will be required for access to each endpoint, excluding the endpoints marked with the @allow_unauthenticated decorator.
This option can be configured using the JUPYTER_SERVER_ALLOW_UNAUTHENTICATED_ACCESS environment variable: any non-empty value other than "true" and "yes" will prevent unauthenticated access to endpoints without @allow_unauthenticated.
- authenticate_prometheus#
Require authentication to access prometheus metrics.
- authorizer_class#
The authorizer class to use.
- autoreload#
Reload the webapp when changes are made to any Python src files.
- base_url#
The base URL for the Jupyter server.
Leading and trailing slashes can be omitted, and will automatically be added.
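The slash normalization described above can be sketched as follows (normalize_base_url is a hypothetical helper written for illustration, not part of the API):

```python
def normalize_base_url(value):
    """Add the leading and trailing slashes that base_url tolerates omitting.

    A sketch of the documented normalization, not the library code.
    """
    if not value.startswith("/"):
        value = "/" + value
    if not value.endswith("/"):
        value = value + "/"
    return value

assert normalize_base_url("jupyter") == "/jupyter/"
assert normalize_base_url("/jupyter/") == "/jupyter/"
```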
- browser#
Specify what command to use to invoke a web browser when starting the server. If not specified, the default browser will be determined by the
webbrowser
standard library module, which allows setting of the BROWSER environment variable to override it.
- browser_open_file#
A trait for unicode strings.
- browser_open_file_to_run#
A trait for unicode strings.
- certfile#
The full path to an SSL/TLS certificate file.
- classes: ClassesType = [<class 'jupyter_client.manager.KernelManager'>, <class 'jupyter_client.session.Session'>, <class 'jupyter_server.services.kernels.kernelmanager.MappingKernelManager'>, <class 'jupyter_client.kernelspec.KernelSpecManager'>, <class 'jupyter_server.services.kernels.kernelmanager.AsyncMappingKernelManager'>, <class 'jupyter_server.services.contents.manager.ContentsManager'>, <class 'jupyter_server.services.contents.filemanager.FileContentsManager'>, <class 'jupyter_server.services.contents.manager.AsyncContentsManager'>, <class 'jupyter_server.services.contents.filemanager.AsyncFileContentsManager'>, <class 'nbformat.sign.NotebookNotary'>, <class 'jupyter_server.gateway.managers.GatewayMappingKernelManager'>, <class 'jupyter_server.gateway.managers.GatewayKernelSpecManager'>, <class 'jupyter_server.gateway.managers.GatewaySessionManager'>, <class 'jupyter_server.gateway.connections.GatewayWebSocketConnection'>, <class 'jupyter_server.gateway.gateway_client.GatewayClient'>, <class 'jupyter_server.auth.authorizer.Authorizer'>, <class 'jupyter_events.logger.EventLogger'>, <class 'jupyter_server.services.kernels.connection.channels.ZMQChannelsWebsocketConnection'>]#
- async cleanup_kernels()#
Shutdown all kernels.
The kernels will shutdown themselves when this process no longer exists, but explicit shutdown allows the KernelManagers to cleanup the connection files.
- Return type:
- client_ca#
The full path to a certificate authority certificate for SSL/TLS client authentication.
- config_manager_class#
The config manager class to use
- contents_manager_class#
The content manager class to use.
- cookie_options#
DEPRECATED. Use IdentityProvider.cookie_options
- cookie_secret#
The random bytes used to secure cookies. By default this is a new random number every time you start the server. Set it to a value in a config file to enable logins to persist across server sessions.
Note: Cookie secrets should be kept private, do not share config files with cookie_secret stored in plaintext (you can read the value from a file).
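For example, to make logins persist across server sessions you might keep the secret in a file referenced from jupyter_server_config.py, rather than inlining it (the path is illustrative):

```python
# jupyter_server_config.py -- illustrative; keep the secret file private
c = get_config()  # noqa: F821 (provided by the traitlets config loader)
c.ServerApp.cookie_secret_file = "/srv/jupyter/cookie_secret"
```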
- cookie_secret_file#
The file where the cookie secret is stored.
- custom_display_url#
Override URL shown to users.
Replace actual URL, including protocol, address, port and base URL, with the given value when displaying URL to the users. Do not change the actual connection URL. If authentication token is enabled, the token is added to the custom URL automatically.
This option is intended to be used when the URL to display to the user cannot be determined reliably by the Jupyter server (proxified or containerized setups for example).
- default_services = ('api', 'auth', 'config', 'contents', 'files', 'kernels', 'kernelspecs', 'nbconvert', 'security', 'sessions', 'shutdown', 'view', 'events')#
- default_url#
The default URL to redirect to from /.
- description: str = 'The Jupyter Server.\n\n This launches a Tornado-based Jupyter Server.'#
- disable_check_xsrf#
Disable cross-site-request-forgery protection
Jupyter server includes protection from cross-site request forgeries, requiring API requests to either:
originate from pages served by this server (validated with XSRF cookie and token), or
authenticate with a token
Some anonymous compute resources still desire the ability to run code, completely without authentication. These services can disable all authentication and security checks, with the full knowledge of what that implies.
- property display_url: str#
Human readable string with URLs for interacting with the running Jupyter Server
- event_logger#
An EventLogger for emitting structured event data from Jupyter Server and extensions.
- examples: str | Unicode[str, str | bytes] = '\njupyter server # start the server\njupyter server --certfile=mycert.pem # use SSL/TLS certificate\njupyter server password # enter a password to protect the server\n'#
- external_connection_dir#
The directory to look at for external kernel connection files, if allow_external_kernels is True. Defaults to Jupyter runtime_dir/external_kernels. Make sure that this directory is not filled with left-over connection files, that could result in unnecessary kernel manager creations.
- extra_services#
handlers that should be loaded at higher priority than the default services
- extra_static_paths#
Extra paths to search for serving static files.
This allows adding javascript/css to be available from the Jupyter server machine, or overriding individual files in the IPython
- extra_template_paths#
Extra paths to search for serving jinja templates.
Can be used to override templates from jupyter_server.templates.
- file_to_run#
Open the named file when the application is launched.
- file_url_prefix#
The URL prefix where files are opened directly.
- flags: StrDict#
An instance of a Python dict.
One or more traits can be passed to the constructor to validate the keys and/or values of the dict. If you need more detailed validation, you may use a custom validator method.
Changed in version 5.0: Added key_trait for validating dict keys.
Changed in version 5.0: Deprecated ambiguous trait, traits args in favor of value_trait, per_key_traits.
- get_secure_cookie_kwargs#
DEPRECATED. Use IdentityProvider.get_secure_cookie_kwargs
- property http_server: HTTPServer#
An instance of Tornado’s HTTPServer class for the Server Web Application.
- identity_provider_class#
The identity provider class to use.
- info_file#
A trait for unicode strings.
- init_httpserver()#
Creates an instance of a Tornado HTTPServer for the Server Web Application and sets the http_server attribute.
- Return type:
- init_ioloop()#
Initialize self.io_loop so that an extension can use it via io_loop.call_later() to create background tasks.
- Return type:
- init_server_extensions()#
If an extension’s metadata includes an ‘app’ key, the value must be a subclass of ExtensionApp. An instance of the class will be created at this step. The config for this instance will inherit the ServerApp’s config object and load its own config.
- Return type:
- initialize(argv=None, find_extensions=True, new_httpserver=True, starter_extension=None)#
Initialize the Server application class, configurables, web application, and http server.
- Parameters:
argv (list or None) – CLI arguments to parse.
find_extensions (bool) – If True, find and load extensions listed in Jupyter config paths. If False, only load extensions that are passed to ServerApp directly through the
argv
,config
, orjpserver_extensions
arguments.new_httpserver (bool) – If True, a tornado HTTPServer instance will be created and configured for the Server Web Application. This will set the http_server attribute of this class.
starter_extension (str) – If given, it references the name of an extension point that started the Server. We will try to load configuration from extension point
- Return type:
- iopub_data_rate_limit#
DEPRECATED. Use ZMQChannelsWebsocketConnection.iopub_data_rate_limit
- iopub_msg_rate_limit#
DEPRECATED. Use ZMQChannelsWebsocketConnection.iopub_msg_rate_limit
- ip#
The IP address the Jupyter server will listen on.
- jinja_environment_options#
Supply extra arguments that will be passed to Jinja environment.
- jinja_template_vars#
Extra variables to supply to jinja templates when rendering.
- jpserver_extensions#
Dict of Python modules to load as Jupyter server extensions. Entry values can be used to enable and disable the loading of the extensions. The extensions will be loaded in alphabetical order.
- kernel_manager_class#
The kernel manager class to use.
- kernel_spec_manager#
A trait whose value must be an instance of a specified class.
The value can also be an instance of a subclass of the specified class.
Subclasses can declare default classes by overriding the klass attribute
- kernel_spec_manager_class#
The kernel spec manager class to use. Should be a subclass of jupyter_client.kernelspec.KernelSpecManager.
The API of KernelSpecManager is provisional and might change without warning between this version of Jupyter and the next stable one.
- kernel_websocket_connection_class#
The kernel websocket connection class to use.
- kernel_ws_protocol#
DEPRECATED. Use ZMQChannelsWebsocketConnection.kernel_ws_protocol
- keyfile#
The full path to a private key file for usage with SSL/TLS.
- limit_rate#
DEPRECATED. Use ZMQChannelsWebsocketConnection.limit_rate
- load_server_extensions()#
Load any extensions specified by config.
Import the module, then call the load_jupyter_server_extension function, if one exists.
The extension API is experimental, and may change in future releases.
- Return type:
- local_hostnames#
Hostnames to allow as local when allow_remote_access is False.
Local IP addresses (such as 127.0.0.1 and ::1) are automatically accepted as local as well.
- login_handler_class#
The login handler class to use.
- logout_handler_class#
The logout handler class to use.
- max_body_size#
Sets the maximum allowed size of the client request body, specified in the Content-Length request header field. If the size in a request exceeds the configured value, a malformed HTTP message is returned to the client.
Note: max_body_size is applied even in streaming mode.
- max_buffer_size#
Gets or sets the maximum amount of memory, in bytes, that is allocated for use by the buffer manager.
- min_open_files_limit#
Gets or sets a lower bound on the open file handles process resource limit. This may need to be increased if you run into an OSError: [Errno 24] Too many open files. This is not applicable when running on Windows.
- name: str | Unicode[str, str | bytes] = 'jupyter-server'#
- no_browser_open_file#
If True, do not write the redirect HTML file to disk, or show it in messages.
- notebook_dir#
DEPRECATED, use root_dir.
- open_browser#
Whether to open in a browser after starting. The specific browser used is platform dependent and determined by the Python standard library webbrowser module, unless it is overridden using the --browser (ServerApp.browser) configuration option.
- password#
DEPRECATED in 2.0. Use PasswordIdentityProvider.hashed_password
- password_required#
DEPRECATED in 2.0. Use PasswordIdentityProvider.password_required
- port#
The port the server will listen on (env: JUPYTER_PORT).
- port_default_value = 8888#
- port_env = 'JUPYTER_PORT'#
- port_retries#
The number of additional ports to try if the specified port is not available (env: JUPYTER_PORT_RETRIES).
- port_retries_default_value = 50#
- port_retries_env = 'JUPYTER_PORT_RETRIES'#
- preferred_dir#
Preferred starting directory to use for notebooks and kernels. ServerApp.preferred_dir is deprecated in jupyter-server 2.0. Use FileContentsManager.preferred_dir instead
- pylab#
DISABLED: use %pylab or %matplotlib in the notebook to enable matplotlib.
- quit_button#
If True, display controls to shut down the Jupyter server, such as menu items or buttons.
- rate_limit_window#
DEPRECATED. Use ZMQChannelsWebsocketConnection.rate_limit_window
- remove_browser_open_file()#
Remove the jpserver-<pid>-open.html file created for this server.
Ignores the error raised when the file has already been removed.
- Return type:
- remove_browser_open_files()#
Remove the browser_open_file and browser_open_file_to_run files created for this server.
Ignores the error raised when the file has already been removed.
- Return type:
- remove_server_info_file()#
Remove the jpserver-<pid>.json file created for this server.
Ignores the error raised when the file has already been removed.
- Return type:
- reraise_server_extension_failures#
Reraise exceptions encountered loading server extensions?
- root_dir#
The directory to use for notebooks and kernels.
- running_server_info(kernel_count=True)#
Return the current working directory and the server url information
- Return type:
- session_manager_class#
The session manager class to use.
- shutdown_no_activity()#
Shutdown server on timeout when there are no kernels or terminals.
- Return type:
- shutdown_no_activity_timeout#
Shut down the server after N seconds with no kernels running and no activity. This can be used together with culling idle kernels (MappingKernelManager.cull_idle_timeout) to shut down the Jupyter server when it's not in use. This is not precisely timed: it may shut down up to a minute later. 0 (the default) disables this automatic shutdown.
- sock#
The UNIX socket the Jupyter server will listen on.
- sock_mode#
The permissions mode for UNIX socket creation (default: 0600).
- ssl_options#
Supply SSL options for the tornado HTTPServer. See the tornado docs for details.
- start()#
Start the Jupyter server app, after initialization
This method takes no arguments so all configuration and initialization must be done prior to calling this method.
- Return type:
- static_custom_path#
Path to search for custom.js, css
- static_immutable_cache#
Paths to set up static files as immutable.
This allows setting up the cache control of static files as immutable. It should be used for static files named with a hash, for instance.
- subcommands: dict[str, t.Any] = {'extension': (<class 'jupyter_server.extension.serverextension.ServerExtensionApp'>, 'Work with Jupyter server extensions'), 'list': (<class 'jupyter_server.serverapp.JupyterServerListApp'>, 'List currently running Jupyter servers.'), 'password': (<class 'jupyter_server.serverapp.JupyterPasswordApp'>, 'Set a password for the Jupyter server.'), 'stop': (<class 'jupyter_server.serverapp.JupyterServerStopApp'>, 'Stop currently running Jupyter server for a given port')}#
- terminado_settings#
Supply overrides for terminado. Currently only supports “shell_command”.
- terminals_enabled#
Set to False to disable terminals.
This does not make the server more secure by itself. Anything the user can do in a terminal, they can also do in a notebook.
Terminals may also be automatically disabled if the terminado package is not available.
- token#
DEPRECATED. Use IdentityProvider.token
- tornado_settings#
Supply overrides for the tornado.web.Application that the Jupyter server uses.
- trust_xheaders#
Whether or not to trust the X-Scheme/X-Forwarded-Proto and X-Real-Ip/X-Forwarded-For headers sent by the upstream reverse proxy. Necessary if the proxy handles SSL.
- use_redirect_file#
Disable launching the browser by redirect file. For versions of notebook > 5.7.2, a security measure was added that prevented the authentication token used to launch the browser from being visible. This feature makes it difficult for other users on a multi-user system to run code in your Jupyter session as you. However, in some environments (like Windows Subsystem for Linux (WSL) and Chromebooks), launching a browser using a redirect file can lead to the browser failing to load, because of the difference in file structures/paths between the runtime and the browser.
Setting this to False will disable this behavior, allowing the browser to launch by using a URL and visible token (as before).
- version: str = '2.14.0'#
- webbrowser_open_new#
Specify where to open the server on startup. This is the new argument passed to the standard library method webbrowser.open. The behaviour is not guaranteed, but depends on browser support. Valid values are:
2 opens a new tab,
1 opens a new window,
0 opens in an existing window.
See the webbrowser.open documentation for details.
- websocket_compression_options#
Set the tornado compression options for websocket connections.
This value will be returned from WebSocketHandler.get_compression_options(). None (default) will disable compression. A dict (even an empty one) will enable compression.
See the tornado docs for WebSocketHandler.get_compression_options for details.
- websocket_ping_interval#
Configure the websocket ping interval in seconds.
Websockets are long-lived connections that are used by some Jupyter Server extensions.
Periodic pings help to detect disconnected clients and keep the connection active. If this is set to None, then no pings will be performed.
When a ping is sent, the client has websocket_ping_timeout seconds to respond. If no response is received within this period, the connection will be closed from the server side.
- websocket_ping_timeout#
Configure the websocket ping timeout in seconds.
See websocket_ping_interval for details.
- websocket_url#
The base URL for websockets, if it differs from the HTTP server (hint: it almost certainly doesn’t).
Should be in the form of an HTTP origin: ws[s]://hostname[:port]
- write_browser_open_file()#
Write a jpserver-<pid>-open.html file.
This can be used to open the notebook in a browser
- Return type:
- write_browser_open_files()#
Write the browser_open_file and browser_open_file_to_run files.
This can be used to open a file directly in a browser.
- Return type:
- class jupyter_server.serverapp.ServerWebApplication(jupyter_app, default_services, kernel_manager, contents_manager, session_manager, kernel_spec_manager, config_manager, event_logger, extra_services, log, base_url, default_url, settings_overrides, jinja_env_options, *, authorizer=None, identity_provider=None, kernel_websocket_connection_class=None, websocket_ping_interval=None, websocket_ping_timeout=None)#
Bases:
Application
A server web application.
- add_handlers(host_pattern, host_handlers)#
Appends the given handlers to our handler list.
Host patterns are processed sequentially in the order they were added. All matching patterns will be considered.
- init_handlers(default_services, settings)#
Load the (URL pattern, handler) tuples for each component.
- init_settings(jupyter_app, kernel_manager, contents_manager, session_manager, kernel_spec_manager, config_manager, event_logger, extra_services, log, base_url, default_url, settings_overrides, jinja_env_options=None, *, authorizer=None, identity_provider=None, kernel_websocket_connection_class=None, websocket_ping_interval=None, websocket_ping_timeout=None)#
Initialize settings for the web application.
- last_activity()#
Get a UTC timestamp for when the server last did something.
Includes: API activity, kernel activity, kernel shutdown, and terminal activity.
- jupyter_server.serverapp.list_running_servers(runtime_dir=None, log=None)#
Iterate over the server info files of running Jupyter servers.
Given a runtime directory, find jpserver-* files in the security directory, and yield dicts of their information, each one pertaining to a currently running Jupyter server instance.
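The discovery step described above can be sketched with the standard library; this is a simplified illustration (the real implementation also verifies that each server process is still alive):

```python
import glob
import json
import os

def list_server_info(runtime_dir):
    """Yield one dict per jpserver-*.json info file found in runtime_dir.

    A simplified sketch of the discovery described above, not the
    library implementation (which also checks process liveness).
    """
    for path in glob.glob(os.path.join(runtime_dir, "jpserver-*.json")):
        with open(path) as f:
            yield json.load(f)
```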
- jupyter_server.serverapp.load_handlers(name)#
Load the (URL pattern, handler) tuples for each component.
- Return type:
- jupyter_server.serverapp.random_ports(port, n)#
Generate a list of n random ports near the given port.
The first 5 ports will be sequential, and the remaining n-5 will be randomly selected in the range [port-2*n, port+2*n].
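The strategy just described can be sketched as follows (an illustrative reimplementation, not the library source):

```python
import random

def random_ports(port, n):
    """Yield n candidate ports near ``port``.

    The first 5 are sequential; the rest are drawn at random from
    [port - 2n, port + 2n] (a sketch of the documented strategy).
    """
    for i in range(min(n, 5)):
        yield port + i
    for _ in range(max(n - 5, 0)):
        yield port + random.randint(-2 * n, 2 * n)

candidates = list(random_ports(8888, 10))
```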
- jupyter_server.serverapp.shutdown_server(server_info, timeout=5, log=None)#
Shutdown a Jupyter server in a separate process.
server_info should be a dictionary as produced by list_running_servers().
Will first try to request shutdown using /api/shutdown . On Unix, if the server is still running after timeout seconds, it will send SIGTERM. After another timeout, it escalates to SIGKILL.
Returns True if the server was stopped by any means, False if stopping it failed (on Windows).
Custom trait types.
- class jupyter_server.traittypes.InstanceFromClasses(klasses=None, args=None, kw=None, **kwargs)#
Bases:
ClassBasedTraitType
A trait whose value must be an instance of a class in a specified list of classes. The value can also be an instance of a subclass of the specified classes. Subclasses can declare default classes by overriding the klass attribute
- default_value_repr()#
Get the default value repr.
- from_string(s)#
Convert from a string.
- info()#
Get the trait info.
- instance_from_importable_klasses(value)#
Check that a given class is a subclass of one of the classes in the klasses list.
- instance_init(obj)#
Initialize the trait.
- make_dynamic_default()#
Make the dynamic default for the trait.
- validate(obj, value)#
Validate an instance.
- class jupyter_server.traittypes.TypeFromClasses(default_value=traitlets.Undefined, klasses=None, **kwargs)#
Bases:
ClassBasedTraitType
A trait whose value must be a subclass of a class in a specified list of classes.
- default_value_repr()#
The default value repr.
- info()#
Returns a description of the trait.
- instance_init(obj)#
Initialize an instance.
- subclass_from_klasses(value)#
Check that a given class is a subclass of one of the classes in the klasses list.
- validate(obj, value)#
Validates that the value is a valid object instance.
Translation related utilities. When imported, injects _ into builtins.
Notebook related utilities
- exception jupyter_server.utils.JupyterServerAuthWarning#
Bases:
RuntimeWarning
Emitted when authentication configuration issue is detected.
Intended for filtering out expected warnings in tests, including downstream tests, rather than for users to silence this warning.
- async jupyter_server.utils.async_fetch(urlstring, method='GET', body=None, headers=None, io_loop=None)#
Send an asynchronous HTTP, HTTPS, or HTTP+UNIX request to a Tornado Web Server. Returns a tornado HTTPResponse.
- Return type:
- jupyter_server.utils.check_version(v, check)#
check version string v >= check
If dev/prerelease tags result in TypeError for string-number comparison, it is assumed that the dependency is satisfied. Users on dev branches are responsible for keeping their own packages up to date.
- Return type:
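The lenient comparison described above can be sketched as follows. This is an illustrative reimplementation: it parses versions naively and treats any parse or comparison failure (such as a prerelease tag) as "satisfied", mirroring the documented behavior.

```python
def version_tuple(v):
    # Naive numeric parse; raises ValueError on dev/prerelease segments.
    return tuple(int(part) for part in v.split("."))

def check_version(v, check):
    """Return True if version string v >= check.

    A sketch of the lenient behavior described above: if prerelease
    tags make the comparison fail, assume the requirement is satisfied.
    Not the library implementation.
    """
    try:
        return version_tuple(v) >= version_tuple(check)
    except (TypeError, ValueError):
        return True
```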
- jupyter_server.utils.expand_path(s)#
Expand $VARS and ~names in a string, like a shell
- Examples:
In [2]: os.environ['FOO'] = 'test'
In [3]: expand_path('variable FOO is $FOO')
Out[3]: 'variable FOO is test'
- Return type:
- jupyter_server.utils.fetch(urlstring, method='GET', body=None, headers=None)#
Send an HTTP, HTTPS, or HTTP+UNIX request to a Tornado Web Server. Returns a tornado HTTPResponse.
- Return type:
- jupyter_server.utils.filefind(filename, path_dirs=None)#
Find a file by looking through a sequence of paths. This iterates through a sequence of paths looking for a file and returns the full, absolute path of the first occurrence of the file. If no set of path dirs is given, the filename is tested as is, after running through expandvars() and expanduser(). Thus a simple call:
filefind("myfile.txt")
will find the file in the current working dir, but:
filefind("~/myfile.txt")
will find the file in the user's home directory. This function does not automatically try any paths, such as the cwd or the user's home directory.
- Parameters:
filename (str) – The filename to look for.
path_dirs (str, None or sequence of str) – The sequence of paths to look for the file in. If None, the filename needs to be absolute or in the cwd. If a string, the string is put into a sequence and then searched. If a sequence, walk through each element and join with filename, calling expandvars() and expanduser() before testing for existence.
- Return type:
Raises IOError or returns the absolute path to the file.
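The search described above can be sketched as follows; this is an illustrative reimplementation, not the library source:

```python
import os

def filefind(filename, path_dirs=None):
    """Return the absolute path of the first match of filename in path_dirs.

    A sketch of the search described above: expands $VARS and ~ in each
    candidate path, and raises IOError (OSError) when nothing is found.
    Not the library implementation.
    """
    filename = os.path.expanduser(os.path.expandvars(filename))
    if path_dirs is None:
        path_dirs = ("",)
    elif isinstance(path_dirs, str):
        path_dirs = (path_dirs,)
    for d in path_dirs:
        d = os.path.expanduser(os.path.expandvars(d))
        candidate = os.path.join(d, filename)
        if os.path.isfile(candidate):
            return os.path.abspath(candidate)
    raise IOError(f"File {filename!r} not found in {path_dirs}")
```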
- jupyter_server.utils.import_item(name)#
Import and return bar given the string foo.bar. Calling bar = import_item("foo.bar") is the functional equivalent of executing the code from foo import bar.
- Parameters:
name (str) – The fully qualified name of the module/package being imported.
- Returns:
mod – The module that was imported.
- Return type:
module object
- jupyter_server.utils.is_namespace_package(namespace)#
Is the provided namespace a Python Namespace Package (PEP420).
https://www.python.org/dev/peps/pep-0420/#specification
Returns None if the module is not importable.
- Return type:
bool | None
- async jupyter_server.utils.run_sync_in_loop(maybe_async)#
DEPRECATED: Use ensure_async from jupyter_core instead.
- jupyter_server.utils.samefile_simple(path, other_path)#
Fill in for os.path.samefile when it is unavailable (Windows+py2).
Do a case-insensitive string comparison in this case plus comparing the full stat result (including times) because Windows + py2 doesn’t support the stat fields needed for identifying if it’s the same file (st_ino, st_dev).
Only to be used if os.path.samefile is not available.
- jupyter_server.utils.to_api_path(os_path, root='')#
Convert a filesystem path to an API path
If given, root will be removed from the path. root must be a filesystem path already.
- Return type:
NewType(ApiPath, str)
- jupyter_server.utils.to_os_path(path, root='')#
Convert an API path to a filesystem path
If given, root will be prepended to the path. root must be a filesystem path already.
- Return type:
- jupyter_server.utils.unix_socket_in_use(socket_path)#
Checks whether a UNIX socket path on disk is in use by attempting to connect to it.
- Return type:
- jupyter_server.utils.url_escape(path)#
Escape special characters in a URL path
Turns ‘/foo bar/’ into ‘/foo%20bar/’
- Return type:
- jupyter_server.utils.url_is_absolute(url)#
Determine whether a given URL is absolute
- Return type:
- jupyter_server.utils.url_path_join(*pieces)#
Join components of a url into a relative url.
Use to prevent double slashes when joining subpaths. This will leave the initial and final / in place.
- Return type:
- jupyter_server.utils.url_unescape(path)#
Unescape special characters in a URL path
Turns ‘/foo%20bar/’ into ‘/foo bar/’
- Return type:
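The URL helpers above can be sketched with the standard library. The url_path_join reimplementation below is illustrative, not the library source; the quoting behavior of url_escape/url_unescape is approximated by urllib.parse.quote/unquote:

```python
from urllib.parse import quote, unquote

def url_path_join(*pieces):
    """Join URL components without doubling slashes, preserving any
    initial and final slash (a sketch of the behavior described above)."""
    initial = pieces[0].startswith("/")
    final = pieces[-1].endswith("/")
    stripped = [piece.strip("/") for piece in pieces]
    result = "/".join(p for p in stripped if p)
    if initial:
        result = "/" + result
    if final and not result.endswith("/"):
        result += "/"
    return result or "/"

# url_escape / url_unescape behave roughly like percent-quoting the path:
assert quote("/foo bar/") == "/foo%20bar/"
assert unquote("/foo%20bar/") == "/foo bar/"
```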
- jupyter_server.utils.urldecode_unix_socket_path(socket_path)#
Decodes a UNIX socket path string from an encoded socket path for the http+unix URI form.
- Return type:
- jupyter_server.utils.urlencode_unix_socket(socket_path)#
Encodes a UNIX socket URL from a socket path for the http+unix URI form.
- Return type:
Module contents#
The Jupyter Server
- class jupyter_server.CallContext#
Bases:
object
CallContext essentially acts as a namespace for managing context variables.
Although not required, it is recommended that any “file-spanning” context variable names (i.e., variables that will be set or retrieved from multiple files or services) be added as constants to this class definition.
- classmethod context_variable_names()#
Returns a list of variable names set for this call context.
- Returns:
names – A list of variable names set for this call context.
- Return type:
List[str]
- classmethod get(name)#
Returns the value corresponding to the named variable relative to this context.
If the named variable doesn’t exist, None will be returned.
- Parameters:
name (str) – The name of the variable to get from the call context
- Returns:
value – The value associated with the named variable for this call context
- Return type:
Any
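The namespace pattern described above can be sketched with contextvars from the standard library. This is an illustrative miniature, not the library source (the real CallContext is context-aware across asyncio tasks; the constant shown is one example of a "file-spanning" name):

```python
from contextvars import ContextVar

class CallContext:
    """Minimal sketch: a namespace for per-call context variables.

    get() returns None for unset names, mirroring the documented behavior.
    """
    JUPYTER_HANDLER = "jupyter_handler"  # example "file-spanning" constant
    _vars = {}

    @classmethod
    def set(cls, name, value):
        cls._vars.setdefault(name, ContextVar(name)).set(value)

    @classmethod
    def get(cls, name):
        var = cls._vars.get(name)
        return None if var is None else var.get(None)

CallContext.set(CallContext.JUPYTER_HANDLER, "handler-1")
```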
Documentation for Contributors#
These pages target people who are interested in contributing directly to the Jupyter Server Project.
Team Meetings, Road Map and Calendar#
Many of the lead Jupyter Server developers meet weekly over Zoom. These meetings are open to everyone.
To see when the next meeting is happening and how to attend, watch this Github issue:
jupyter-server/team-compass#15
Meeting Notes#
Roadmap#
Also check out Jupyter Server’s roadmap where we track future plans for Jupyter Server:
Jupyter Calendar#
General Jupyter contributor guidelines#
If you’re reading this section, you’re probably interested in contributing to Jupyter. Welcome and thanks for your interest in contributing!
Please take a look at the Contributor documentation, familiarize yourself with using the Jupyter Server, and introduce yourself on the mailing list and share what area of the project you are interested in working on.
For general documentation about contributing to Jupyter projects, see the Project Jupyter Contributor Documentation.
Setting Up a Development Environment#
Installing the Jupyter Server#
The development version of the server requires node and pip.
Once you have installed the dependencies mentioned above, use the following steps:
pip install --upgrade pip
git clone https://github.com/jupyter/jupyter_server
cd jupyter_server
pip install -e ".[test]"
If you are using a system-wide Python installation and you only want to install the server for yourself,
you can add --user
to the install commands.
Once you have done this, you can launch the main branch of Jupyter server from any directory in your system with:
jupyter server
Code Styling and Quality Checks#
jupyter_server
has adopted automatic code formatting so you shouldn’t
need to worry too much about your code style.
As long as your code is valid,
the pre-commit hook should take care of how it should look.
pre-commit
and its associated hooks will automatically be installed when
you run pip install -e ".[test]"
To install the pre-commit
hook manually, run the following:
pre-commit install
You can invoke the pre-commit hook by hand at any time with:
pre-commit run
which should run any autoformatting on your code and tell you about any errors it couldn’t fix automatically. You may also install black integration into your text editor to format code automatically.
If you have already committed files before setting up the pre-commit
hook with pre-commit install
, you can fix everything up using
pre-commit run --all-files
. You need to make the fixing commit
yourself after that.
Some of the hooks only run on CI by default, but you can invoke them by
running with the --hook-stage manual
argument.
There are two hatch scripts that can be run locally as well:
hatch run lint:build will enforce styling, and hatch run typing:test will run the type checker.
Troubleshooting the Installation#
If your Jupyter Server does not appear to be running in dev mode, it's possible that you are running other instances of Jupyter Server. You can try the following steps:
Uninstall all instances of the jupyter_server package, including any installations you made using pip or conda.
Run python -m pip install -e . in the jupyter_server repository to install jupyter_server from there.
Run npm run build to make sure the JavaScript and CSS are updated and compiled.
Launch with python -m jupyter_server --port 8989, and check that the browser is pointing to localhost:8989 (rather than the default 8888). You don't have to use port 8989; any port that is neither the default nor already in use will work.
Verify the installation with the steps in the previous section.
Running Tests#
Install dependencies:
pip install -e .[test]
pip install -e examples/simple # to test the examples
To run the Python tests, use:
pytest
pytest examples/simple # to test the examples
You can also run the tests using hatch
without installing test dependencies in your local environment:
pip install hatch
hatch run test:test
The command takes any argument that you can give to pytest
, e.g.:
hatch run test:test -k name_of_method_to_test
You can also drop into a shell in the test environment by running:
hatch -e test shell
Building the Docs#
Install the docs requirements using pip
:
pip install .[doc]
Once you have installed the required packages, you can build the docs with:
cd docs
make html
You can also build the docs using hatch
without installing the docs dependencies in your local environment:
pip install hatch
hatch run docs:build
You can also drop into a shell in the docs environment by running:
hatch -e docs shell
After that, the generated HTML files will be available at
build/html/index.html
. You may view the docs in your browser.
Windows users can find make.bat
in the docs
folder.
You should also have a look at the Project Jupyter Documentation Guide.
Other helpful documentation#
List of helpful links#
Frequently asked questions#
Here is a list of questions we think you might have. This list will always be growing, so please feel free to add your question and answer to this page! 🚀
Can I configure multiple extensions at once?#
Check out our “Operator” docs on how to configure extensions. 📕
Config file and command line options#
The Jupyter Server can be run with a variety of command line arguments. A list of available options can be found below in the options section.
Defaults for these options can also be set by creating a file named
jupyter_server_config.py
in your Jupyter folder. The Jupyter
folder is in your home directory, ~/.jupyter
.
To create a jupyter_server_config.py
file, with all the defaults
commented out, you can use the following command line:
$ jupyter server --generate-config
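The generated file contains every option commented out. As a minimal, hedged sketch (the option names come from the list below; the values are illustrative assumptions, not recommendations), a hand-written jupyter_server_config.py might look like:

```python
# jupyter_server_config.py -- loaded from ~/.jupyter by default.
# `c` is the configuration object Jupyter injects when loading this file.
c.ServerApp.ip = "localhost"      # interface the server listens on
c.ServerApp.port = 8888           # port to listen on (0 lets the server pick)
c.ServerApp.open_browser = False  # don't launch a browser on startup
c.ServerApp.root_dir = "/tmp"     # hypothetical directory for notebooks/kernels
```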
Options#
This list of options can be generated by running the following and hitting enter:
$ jupyter server --help-all
- Application.log_datefmtUnicode
Default:
'%Y-%m-%d %H:%M:%S'
The date format used by logging formatters for %(asctime)s
- Application.log_formatUnicode
Default:
'[%(name)s]%(highlevel)s %(message)s'
The Logging format template
- Application.log_levelany of 0|10|20|30|40|50|'DEBUG'|'INFO'|'WARN'|'ERROR'|'CRITICAL'
Default:
30
Set the log level by value or name.
- Application.logging_configDict
Default:
{}
Configure additional log handlers.
The default stderr logs handler is configured by the log_level, log_datefmt and log_format settings.
This configuration can be used to configure additional handlers (e.g. to output the log to a file) or for finer control over the default handlers.
If provided this should be a logging configuration dictionary, for more information see: https://docs.python.org/3/library/logging.config.html#logging-config-dictschema
This dictionary is merged with the base logging configuration which defines the following:
A logging formatter intended for interactive use called console.
A logging handler that writes to stderr called console, which uses the formatter console.
A logger with the name of this application set to DEBUG level.
This example adds a new handler that writes to a file:
c.Application.logging_config = {
    "handlers": {
        "file": {
            "class": "logging.FileHandler",
            "level": "DEBUG",
            "filename": "<path/to/file>",
        }
    },
    "loggers": {
        "<application-name>": {
            "level": "DEBUG",
            # NOTE: if you don't list the default "console"
            # handler here then it will be disabled
            "handlers": ["console", "file"],
        },
    },
}
- Application.show_configBool
Default:
False
Instead of starting the Application, dump configuration to stdout
- Application.show_config_jsonBool
Default:
False
Instead of starting the Application, dump configuration to stdout (as JSON)
- JupyterApp.answer_yesBool
Default:
False
Answer yes to any prompts.
- JupyterApp.config_fileUnicode
Default:
''
Full path of a config file.
- JupyterApp.config_file_nameUnicode
Default:
''
Specify a config file to load.
- JupyterApp.generate_configBool
Default:
False
Generate default config file.
- JupyterApp.log_datefmtUnicode
Default:
'%Y-%m-%d %H:%M:%S'
The date format used by logging formatters for %(asctime)s
- JupyterApp.log_formatUnicode
Default:
'[%(name)s]%(highlevel)s %(message)s'
The Logging format template
- JupyterApp.log_levelany of 0|10|20|30|40|50|'DEBUG'|'INFO'|'WARN'|'ERROR'|'CRITICAL'
Default:
30
Set the log level by value or name.
- JupyterApp.logging_configDict
Default:
{}
Configure additional log handlers.
The default stderr logs handler is configured by the log_level, log_datefmt and log_format settings.
This configuration can be used to configure additional handlers (e.g. to output the log to a file) or for finer control over the default handlers.
If provided this should be a logging configuration dictionary, for more information see: https://docs.python.org/3/library/logging.config.html#logging-config-dictschema
This dictionary is merged with the base logging configuration which defines the following:
A logging formatter intended for interactive use called console.
A logging handler that writes to stderr called console, which uses the formatter console.
A logger with the name of this application set to DEBUG level.
This example adds a new handler that writes to a file:
c.Application.logging_config = {
    "handlers": {
        "file": {
            "class": "logging.FileHandler",
            "level": "DEBUG",
            "filename": "<path/to/file>",
        }
    },
    "loggers": {
        "<application-name>": {
            "level": "DEBUG",
            # NOTE: if you don't list the default "console"
            # handler here then it will be disabled
            "handlers": ["console", "file"],
        },
    },
}
- JupyterApp.show_configBool
Default:
False
Instead of starting the Application, dump configuration to stdout
- JupyterApp.show_config_jsonBool
Default:
False
Instead of starting the Application, dump configuration to stdout (as JSON)
- ServerApp.allow_credentialsBool
Default:
False
Set the Access-Control-Allow-Credentials: true header
- ServerApp.allow_external_kernelsBool
Default:
False
Whether or not to allow external kernels, whose connection files are placed in external_connection_dir.
- ServerApp.allow_originUnicode
Default:
''
Set the Access-Control-Allow-Origin header
Use ‘*’ to allow any origin to access your server.
Takes precedence over allow_origin_pat.
- ServerApp.allow_origin_patUnicode
Default:
''
Use a regular expression for the Access-Control-Allow-Origin header
Requests from an origin matching the expression will get replies with:
Access-Control-Allow-Origin: origin
where
origin
is the origin of the request. Ignored if allow_origin is set.
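To illustrate how an origin pattern behaves, here is a minimal sketch using Python's re module. The pattern and helper name are hypothetical, and the server's exact matching semantics may differ slightly:

```python
import re

# Hypothetical pattern: allow any subdomain of example.com over https.
allow_origin_pat = r"https://.*\.example\.com"

def origin_allowed(origin: str) -> bool:
    # The server compares the request's Origin header against the pattern;
    # on a match, the origin is echoed back in Access-Control-Allow-Origin.
    return re.fullmatch(allow_origin_pat, origin) is not None

print(origin_allowed("https://app.example.com"))  # True
print(origin_allowed("https://evil.com"))         # False
```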
- ServerApp.allow_password_changeBool
Default:
True
DEPRECATED in 2.0. Use PasswordIdentityProvider.allow_password_change
- ServerApp.allow_remote_accessBool
Default:
False
Allow requests where the Host header doesn’t point to a local server
By default, requests get a 403 forbidden response if the ‘Host’ header shows that the browser thinks it’s on a non-local domain. Setting this option to True disables this check.
This protects against ‘DNS rebinding’ attacks, where a remote web server serves you a page and then changes its DNS to send later requests to a local IP, bypassing same-origin checks.
Local IP addresses (such as 127.0.0.1 and ::1) are allowed as local, along with hostnames configured in local_hostnames.
- ServerApp.allow_rootBool
Default:
False
Whether to allow the user to run the server as root.
- ServerApp.allow_unauthenticated_accessBool
Default:
True
Allow unauthenticated access to endpoints without authentication rule.
When set to True (default in jupyter-server 2.0, subject to change in the future), any request to an endpoint without an authentication rule (either @tornado.web.authenticated or @allow_unauthenticated) will be permitted, regardless of whether the user has logged in or not.
When set to False, logging in will be required for access to each endpoint, excluding the endpoints marked with the @allow_unauthenticated decorator.
This option can be configured using the JUPYTER_SERVER_ALLOW_UNAUTHENTICATED_ACCESS environment variable: any non-empty value other than “true” and “yes” will prevent unauthenticated access to endpoints without @allow_unauthenticated.
- ServerApp.answer_yesBool
Default:
False
Answer yes to any prompts.
- ServerApp.authenticate_prometheusBool
Default:
True
Require authentication to access prometheus metrics.
- ServerApp.authorizer_classType
Default:
'jupyter_server.auth.authorizer.AllowAllAuthorizer'
The authorizer class to use.
- ServerApp.autoreloadBool
Default:
False
Reload the webapp when changes are made to any Python src files.
- ServerApp.base_urlUnicode
Default:
'/'
The base URL for the Jupyter server.
Leading and trailing slashes can be omitted, and will automatically be added.
- ServerApp.browserUnicode
Default:
''
- Specify what command to use to invoke a web
browser when starting the server. If not specified, the default browser will be determined by the
webbrowser
standard library module, which allows setting of the BROWSER environment variable to override it.
- ServerApp.certfileUnicode
Default:
''
The full path to an SSL/TLS certificate file.
- ServerApp.client_caUnicode
Default:
''
The full path to a certificate authority certificate for SSL/TLS client authentication.
- ServerApp.config_fileUnicode
Default:
''
Full path of a config file.
- ServerApp.config_file_nameUnicode
Default:
''
Specify a config file to load.
- ServerApp.config_manager_classType
Default:
'jupyter_server.services.config.manager.ConfigManager'
The config manager class to use
- ServerApp.contents_manager_classType
Default:
'jupyter_server.services.contents.largefilemanager.AsyncLarge...
The content manager class to use.
- ServerApp.cookie_optionsDict
Default:
{}
DEPRECATED. Use IdentityProvider.cookie_options
- ServerApp.cookie_secretBytes
Default:
b''
- The random bytes used to secure cookies.
By default this is a new random number every time you start the server. Set it to a value in a config file to enable logins to persist across server sessions.
Note: Cookie secrets should be kept private; do not share config files with cookie_secret stored in plaintext (you can read the value from a file).
- ServerApp.cookie_secret_fileUnicode
Default:
''
The file where the cookie secret is stored.
- ServerApp.custom_display_urlUnicode
Default:
''
Override URL shown to users.
Replace actual URL, including protocol, address, port and base URL, with the given value when displaying URL to the users. Do not change the actual connection URL. If authentication token is enabled, the token is added to the custom URL automatically.
This option is intended to be used when the URL to display to the user cannot be determined reliably by the Jupyter server (proxified or containerized setups for example).
- ServerApp.default_urlUnicode
Default:
'/'
The default URL to redirect to from
/
- ServerApp.disable_check_xsrfBool
Default:
False
Disable cross-site-request-forgery protection
Jupyter server includes protection from cross-site request forgeries, requiring API requests to either:
originate from pages served by this server (validated with XSRF cookie and token), or
authenticate with a token
Some anonymous compute resources still desire the ability to run code, completely without authentication. These services can disable all authentication and security checks, with the full knowledge of what that implies.
- ServerApp.external_connection_dirUnicode
Default:
None
The directory to look at for external kernel connection files, if allow_external_kernels is True. Defaults to Jupyter runtime_dir/external_kernels. Make sure that this directory is not filled with left-over connection files, as they could result in unnecessary kernel manager creations.
- ServerApp.extra_servicesList
Default:
[]
handlers that should be loaded at higher priority than the default services
- ServerApp.extra_static_pathsList
Default:
[]
Extra paths to search for serving static files.
This allows adding javascript/css to be available from the Jupyter server machine, or overriding individual files in the IPython
- ServerApp.extra_template_pathsList
Default:
[]
Extra paths to search for serving jinja templates.
Can be used to override templates from jupyter_server.templates.
- ServerApp.file_to_runUnicode
Default:
''
Open the named file when the application is launched.
- ServerApp.file_url_prefixUnicode
Default:
'notebooks'
The URL prefix where files are opened directly.
- ServerApp.generate_configBool
Default:
False
Generate default config file.
- ServerApp.get_secure_cookie_kwargsDict
Default:
{}
DEPRECATED. Use IdentityProvider.get_secure_cookie_kwargs
- ServerApp.identity_provider_classType
Default:
'jupyter_server.auth.identity.PasswordIdentityProvider'
The identity provider class to use.
- ServerApp.iopub_data_rate_limitFloat
Default:
0.0
DEPRECATED. Use ZMQChannelsWebsocketConnection.iopub_data_rate_limit
- ServerApp.iopub_msg_rate_limitFloat
Default:
0.0
DEPRECATED. Use ZMQChannelsWebsocketConnection.iopub_msg_rate_limit
- ServerApp.ipUnicode
Default:
'localhost'
The IP address the Jupyter server will listen on.
- ServerApp.jinja_environment_optionsDict
Default:
{}
Supply extra arguments that will be passed to Jinja environment.
- ServerApp.jinja_template_varsDict
Default:
{}
Extra variables to supply to jinja templates when rendering.
- ServerApp.jpserver_extensionsDict
Default:
{}
Dict of Python modules to load as Jupyter server extensions. Entry values can be used to enable and disable the loading of the extensions. The extensions will be loaded in alphabetical order.
- ServerApp.kernel_manager_classType
Default:
'jupyter_server.services.kernels.kernelmanager.MappingKernelM...
The kernel manager class to use.
- ServerApp.kernel_spec_manager_classType
Default:
'builtins.object'
The kernel spec manager class to use. Should be a subclass of
jupyter_client.kernelspec.KernelSpecManager
. The API of KernelSpecManager is provisional and might change without warning between this version of Jupyter and the next stable one.
- ServerApp.kernel_websocket_connection_classType
Default:
'jupyter_server.services.kernels.connection.base.BaseKernelWe...
The kernel websocket connection class to use.
- ServerApp.kernel_ws_protocolUnicode
Default:
''
DEPRECATED. Use ZMQChannelsWebsocketConnection.kernel_ws_protocol
- ServerApp.keyfileUnicode
Default:
''
The full path to a private key file for usage with SSL/TLS.
- ServerApp.limit_rateBool
Default:
False
DEPRECATED. Use ZMQChannelsWebsocketConnection.limit_rate
- ServerApp.local_hostnamesList
Default:
['localhost']
Hostnames to allow as local when allow_remote_access is False.
Local IP addresses (such as 127.0.0.1 and ::1) are automatically accepted as local as well.
- ServerApp.log_datefmtUnicode
Default:
'%Y-%m-%d %H:%M:%S'
The date format used by logging formatters for %(asctime)s
- ServerApp.log_formatUnicode
Default:
'[%(name)s]%(highlevel)s %(message)s'
The Logging format template
- ServerApp.log_levelany of 0|10|20|30|40|50|'DEBUG'|'INFO'|'WARN'|'ERROR'|'CRITICAL'
Default:
30
Set the log level by value or name.
- ServerApp.logging_configDict
Default:
{}
Configure additional log handlers.
The default stderr logs handler is configured by the log_level, log_datefmt and log_format settings.
This configuration can be used to configure additional handlers (e.g. to output the log to a file) or for finer control over the default handlers.
If provided this should be a logging configuration dictionary, for more information see: https://docs.python.org/3/library/logging.config.html#logging-config-dictschema
This dictionary is merged with the base logging configuration which defines the following:
A logging formatter intended for interactive use called console.
A logging handler that writes to stderr called console, which uses the formatter console.
A logger with the name of this application set to DEBUG level.
This example adds a new handler that writes to a file:
c.Application.logging_config = {
    "handlers": {
        "file": {
            "class": "logging.FileHandler",
            "level": "DEBUG",
            "filename": "<path/to/file>",
        }
    },
    "loggers": {
        "<application-name>": {
            "level": "DEBUG",
            # NOTE: if you don't list the default "console"
            # handler here then it will be disabled
            "handlers": ["console", "file"],
        },
    },
}
- ServerApp.login_handler_classType
Default:
'jupyter_server.auth.login.LegacyLoginHandler'
The login handler class to use.
- ServerApp.logout_handler_classType
Default:
'jupyter_server.auth.logout.LogoutHandler'
The logout handler class to use.
- ServerApp.max_body_sizeInt
Default:
536870912
Sets the maximum allowed size of the client request body, specified in the Content-Length request header field. If the size in a request exceeds the configured value, a malformed HTTP message is returned to the client.
Note: max_body_size is applied even in streaming mode.
- ServerApp.max_buffer_sizeInt
Default:
536870912
Gets or sets the maximum amount of memory, in bytes, that is allocated for use by the buffer manager.
- ServerApp.min_open_files_limitInt
Default:
0
Gets or sets a lower bound on the open file handles process resource limit. This may need to be increased if you run into an OSError: [Errno 24] Too many open files. This is not applicable when running on Windows.
- ServerApp.notebook_dirUnicode
Default:
''
DEPRECATED, use root_dir.
- ServerApp.open_browserBool
Default:
False
- Whether to open in a browser after starting.
The specific browser used is platform dependent and determined by the python standard library
webbrowser
module, unless it is overridden using the --browser (ServerApp.browser) configuration option.
- ServerApp.passwordUnicode
Default:
''
DEPRECATED in 2.0. Use PasswordIdentityProvider.hashed_password
- ServerApp.password_requiredBool
Default:
False
DEPRECATED in 2.0. Use PasswordIdentityProvider.password_required
- ServerApp.portInt
Default:
0
The port the server will listen on (env: JUPYTER_PORT).
- ServerApp.port_retriesInt
Default:
50
The number of additional ports to try if the specified port is not available (env: JUPYTER_PORT_RETRIES).
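The retry behaviour can be sketched as follows; this is an illustrative helper, not the server's actual implementation:

```python
import socket

def find_open_port(start_port: int, port_retries: int = 50) -> int:
    """Try start_port, then up to port_retries successive ports,
    returning the first one that can be bound (a sketch of the
    retry behaviour, not the server's real code)."""
    for offset in range(port_retries + 1):
        port = start_port + offset
        try:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.bind(("127.0.0.1", port))
                return port
        except OSError:
            continue  # port already in use; try the next one
    raise RuntimeError(
        f"No open port found in range {start_port}-{start_port + port_retries}"
    )
```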
- ServerApp.preferred_dirUnicode
Default:
''
Preferred starting directory to use for notebooks and kernels. ServerApp.preferred_dir is deprecated in jupyter-server 2.0. Use FileContentsManager.preferred_dir instead
- ServerApp.pylabUnicode
Default:
'disabled'
DISABLED: use %pylab or %matplotlib in the notebook to enable matplotlib.
- ServerApp.quit_buttonBool
Default:
True
If True, display controls to shut down the Jupyter server, such as menu items or buttons.
- ServerApp.rate_limit_windowFloat
Default:
0.0
DEPRECATED. Use ZMQChannelsWebsocketConnection.rate_limit_window
- ServerApp.reraise_server_extension_failuresBool
Default:
False
Reraise exceptions encountered loading server extensions?
- ServerApp.root_dirUnicode
Default:
''
The directory to use for notebooks and kernels.
- ServerApp.session_manager_classType
Default:
'builtins.object'
The session manager class to use.
- ServerApp.show_configBool
Default:
False
Instead of starting the Application, dump configuration to stdout
- ServerApp.show_config_jsonBool
Default:
False
Instead of starting the Application, dump configuration to stdout (as JSON)
- ServerApp.shutdown_no_activity_timeoutInt
Default:
0
Shut down the server after N seconds with no kernels running and no activity. This can be used together with culling idle kernels (MappingKernelManager.cull_idle_timeout) to shut down the Jupyter server when it's not in use. This is not precisely timed: it may shut down up to a minute later. 0 (the default) disables this automatic shutdown.
- ServerApp.sockUnicode
Default:
''
The UNIX socket the Jupyter server will listen on.
- ServerApp.sock_modeUnicode
Default:
'0600'
The permissions mode for UNIX socket creation (default: 0600).
- ServerApp.ssl_optionsDict
Default:
{}
- Supply SSL options for the tornado HTTPServer.
See the tornado docs for details.
- ServerApp.static_immutable_cacheList
Default:
[]
Paths to set up static files as immutable.
This allows setting the cache control of static files as immutable. It should be used for static files named with a hash, for instance.
- ServerApp.terminado_settingsDict
Default:
{}
Supply overrides for terminado. Currently only supports “shell_command”.
- ServerApp.terminals_enabledBool
Default:
False
Set to False to disable terminals.
This does not make the server more secure by itself. Anything the user can do in a terminal, they can also do in a notebook.
Terminals may also be automatically disabled if the terminado package is not available.
- ServerApp.tokenUnicode
Default:
'<DEPRECATED>'
DEPRECATED. Use IdentityProvider.token
- ServerApp.tornado_settingsDict
Default:
{}
Supply overrides for the tornado.web.Application that the Jupyter server uses.
- ServerApp.trust_xheadersBool
Default:
False
Whether or not to trust the X-Scheme/X-Forwarded-Proto and X-Real-Ip/X-Forwarded-For headers sent by the upstream reverse proxy. Necessary if the proxy handles SSL.
- ServerApp.use_redirect_fileBool
Default:
True
- Disable launching browser by redirect file
For versions of notebook > 5.7.2, a security measure was added that prevented the authentication token used to launch the browser from being visible. This feature makes it difficult for other users on a multi-user system to run code in your Jupyter session as you. However, in some environments (like Windows Subsystem for Linux (WSL) and Chromebooks), launching a browser using a redirect file can lead to the browser failing to load. This is because of the difference in file structures/paths between the runtime and the browser.
Setting this to False will disable this behavior, allowing the browser to launch by using a URL and visible token (as before).
- ServerApp.webbrowser_open_newInt
Default:
2
- Specify where to open the server on startup. This is the
new
argument passed to the standard library methodwebbrowser.open
. The behaviour is not guaranteed, but depends on browser support. Valid values are:2 opens a new tab,
1 opens a new window,
0 opens in an existing window.
See the
webbrowser.open
documentation for details.
- ServerApp.websocket_compression_optionsAny
Default:
None
Set the tornado compression options for websocket connections.
This value will be returned from
WebSocketHandler.get_compression_options()
. None (default) will disable compression. A dict (even an empty one) will enable compression. See the tornado docs for WebSocketHandler.get_compression_options for details.
- ServerApp.websocket_ping_intervalInt
Default:
0
Configure the websocket ping interval in seconds.
Websockets are long-lived connections that are used by some Jupyter Server extensions.
Periodic pings help to detect disconnected clients and keep the connection active. If this is set to None, then no pings will be performed.
When a ping is sent, the client has
websocket_ping_timeout
seconds to respond. If no response is received within this period, the connection will be closed from the server side.
- ServerApp.websocket_ping_timeoutInt
Default:
0
Configure the websocket ping timeout in seconds.
See
websocket_ping_interval
for details.
- ServerApp.websocket_urlUnicode
Default:
''
- The base URL for websockets,
if it differs from the HTTP server (hint: it almost certainly doesn’t).
Should be in the form of an HTTP origin: ws[s]://hostname[:port]
- ConnectionFileMixin.connection_fileUnicode
Default:
''
JSON file in which to store connection info [default: kernel-<pid>.json]
This file will contain the IP, ports, and authentication key needed to connect clients to this kernel. By default, this file will be created in the security dir of the current profile, but can be specified by absolute path.
- ConnectionFileMixin.control_portInt
Default:
0
set the control (ROUTER) port [default: random]
- ConnectionFileMixin.hb_portInt
Default:
0
set the heartbeat port [default: random]
- ConnectionFileMixin.iopub_portInt
Default:
0
set the iopub (PUB) port [default: random]
- ConnectionFileMixin.ipUnicode
Default:
''
- Set the kernel’s IP address [default localhost].
If the IP address is something other than localhost, then Consoles on other machines will be able to connect to the Kernel, so be careful!
- ConnectionFileMixin.shell_portInt
Default:
0
set the shell (ROUTER) port [default: random]
- ConnectionFileMixin.stdin_portInt
Default:
0
set the stdin (ROUTER) port [default: random]
- ConnectionFileMixin.transportany of
'tcp'|'ipc' (case-insensitive) Default:
'tcp'
No description
- KernelManager.autorestartBool
Default:
True
Should we autorestart the kernel if it dies.
- KernelManager.cache_portsBool
Default:
False
True if the MultiKernelManager should cache ports for this KernelManager instance
- KernelManager.connection_fileUnicode
Default:
''
JSON file in which to store connection info [default: kernel-<pid>.json]
This file will contain the IP, ports, and authentication key needed to connect clients to this kernel. By default, this file will be created in the security dir of the current profile, but can be specified by absolute path.
- KernelManager.control_portInt
Default:
0
set the control (ROUTER) port [default: random]
- KernelManager.hb_portInt
Default:
0
set the heartbeat port [default: random]
- KernelManager.iopub_portInt
Default:
0
set the iopub (PUB) port [default: random]
- KernelManager.ipUnicode
Default:
''
- Set the kernel’s IP address [default localhost].
If the IP address is something other than localhost, then Consoles on other machines will be able to connect to the Kernel, so be careful!
- KernelManager.shell_portInt
Default:
0
set the shell (ROUTER) port [default: random]
- KernelManager.shutdown_wait_timeFloat
Default:
5.0
Time to wait for a kernel to terminate before killing it, in seconds. When a shutdown request is initiated, the kernel will be immediately sent an interrupt (SIGINT), followed by a shutdown_request message. After 1/2 of shutdown_wait_time it will be sent a terminate (SIGTERM) request, and finally at the end of shutdown_wait_time it will be killed (SIGKILL). terminate and kill may be equivalent on Windows. Note that this value can be overridden by the in-use kernel provisioner, since shutdown times may vary by provisioned environment.
- KernelManager.stdin_portInt
Default:
0
set the stdin (ROUTER) port [default: random]
- KernelManager.transportany of
'tcp'|'ipc' (case-insensitive) Default:
'tcp'
No description
- Session.buffer_thresholdInt
Default:
1024
Threshold (in bytes) beyond which an object’s buffer should be extracted to avoid pickling.
- Session.check_pidBool
Default:
True
Whether to check PID to protect against calls after fork.
This check can be disabled if fork-safety is handled elsewhere.
- Session.copy_thresholdInt
Default:
65536
Threshold (in bytes) beyond which a buffer should be sent without copying.
- Session.debugBool
Default:
False
Debug output in the Session
- Session.digest_history_sizeInt
Default:
65536
The maximum number of digests to remember.
The digest history will be culled when it exceeds this value.
- Session.item_thresholdInt
Default:
64
- The maximum number of items for a container to be introspected for custom serialization.
Containers larger than this are pickled outright.
- Session.keyCBytes
Default:
b''
execution key, for signing messages.
- Session.keyfileUnicode
Default:
''
path to file containing execution key.
- Session.metadataDict
Default:
{}
Metadata dictionary, which serves as the default top-level metadata dict for each message.
- Session.packerDottedObjectName
Default:
'json'
- The name of the packer for serializing messages.
Should be one of ‘json’, ‘pickle’, or an import name for a custom callable serializer.
- Session.sessionCUnicode
Default:
''
The UUID identifying this session.
- Session.signature_schemeUnicode
Default:
'hmac-sha256'
- The digest scheme used to construct the message signatures.
Must have the form ‘hmac-HASH’.
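As an illustration of the 'hmac-sha256' scheme, the sketch below computes an HMAC-SHA256 digest over a list of serialized message parts, keyed with a hypothetical session key; in the Jupyter messaging protocol the signed parts are the serialized header, parent header, metadata, and content frames:

```python
import hashlib
import hmac

key = b"secret-session-key"  # hypothetical; in practice this is Session.key

def sign(parts: list[bytes], key: bytes) -> str:
    """Return the hex HMAC-SHA256 digest over the concatenated parts."""
    h = hmac.new(key, digestmod=hashlib.sha256)
    for part in parts:
        h.update(part)  # parts are fed into the digest in order
    return h.hexdigest()

sig = sign([b'{"msg_type": "execute_request"}', b"{}"], key)
print(len(sig))  # 64 hex characters for SHA-256
```

The receiver recomputes the digest with the shared key and rejects the message if it does not match (real implementations compare with hmac.compare_digest to avoid timing attacks).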
- Session.unpackerDottedObjectName
Default:
'json'
- The name of the unpacker for unserializing messages.
Only used with custom functions for
packer
.
- Session.usernameUnicode
Default:
'username'
Username for the Session. Default is your system username.
- MultiKernelManager.default_kernel_nameUnicode
Default:
'python3'
The name of the default kernel to start
- MultiKernelManager.kernel_manager_classDottedObjectName
Default:
'jupyter_client.ioloop.IOLoopKernelManager'
- The kernel manager class. This is configurable to allow
subclassing of the KernelManager for customized behavior.
- MultiKernelManager.shared_contextBool
Default:
True
Share a single zmq.Context to talk to all my kernels
- MappingKernelManager.allow_tracebacksBool
Default:
True
Whether to send tracebacks to clients on exceptions.
- MappingKernelManager.allowed_message_typesList
Default:
[]
- White list of allowed kernel message types.
When the list is empty, all message types are allowed.
- MappingKernelManager.buffer_offline_messagesBool
Default:
True
Whether messages from kernels whose frontends have disconnected should be buffered in-memory.
When True (default), messages are buffered and replayed on reconnect, avoiding lost messages due to interrupted connectivity.
Disable if long-running kernels will produce too much output while no frontends are connected.
- MappingKernelManager.cull_busyBool
Default:
False
- Whether to consider culling kernels which are busy.
Only effective if cull_idle_timeout > 0.
- MappingKernelManager.cull_connectedBool
Default:
False
- Whether to consider culling kernels which have one or more connections.
Only effective if cull_idle_timeout > 0.
- MappingKernelManager.cull_idle_timeoutInt
Default:
0
- Timeout (in seconds) after which a kernel is considered idle and ready to be culled.
Values of 0 or lower disable culling. Very short timeouts may result in kernels being culled for users with poor network connections.
- MappingKernelManager.cull_intervalInt
Default:
300
The interval (in seconds) on which to check for idle kernels exceeding the cull timeout value.
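The culling options above work together; a minimal sketch of enabling idle-kernel culling in a jupyter_server_config.py (the values below are illustrative, not defaults):

```python
from traitlets.config import Config

# In a real jupyter_server_config.py the `c` object is provided by
# get_config(); a traitlets Config instance behaves the same way and
# keeps this sketch self-contained.
c = Config()

c.MappingKernelManager.cull_idle_timeout = 1200  # cull kernels idle > 20 minutes
c.MappingKernelManager.cull_interval = 120       # check for idle kernels every 2 minutes
c.MappingKernelManager.cull_busy = False         # never cull kernels that are busy
c.MappingKernelManager.cull_connected = False    # spare kernels with connected frontends
```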
- MappingKernelManager.default_kernel_nameUnicode
Default:
'python3'
The name of the default kernel to start
- MappingKernelManager.kernel_info_timeoutFloat
Default:
60
Timeout for giving up on a kernel (in seconds).
On starting and restarting kernels, we check whether the kernel is running and responsive by sending kernel_info_requests. This sets the timeout in seconds for how long the kernel can take before being presumed dead. This affects the MappingKernelManager (which handles kernel restarts) and the ZMQChannelsHandler (which handles the startup).
- MappingKernelManager.kernel_manager_classDottedObjectName
Default:
'jupyter_client.ioloop.IOLoopKernelManager'
- The kernel manager class. This is configurable to allow
subclassing of the KernelManager for customized behavior.
- MappingKernelManager.root_dirUnicode
Default:
''
No description
- MappingKernelManager.shared_contextBool
Default:
True
Share a single zmq.Context to talk to all my kernels
- MappingKernelManager.traceback_replacement_messageUnicode
Default:
'An exception occurred at runtime, which is not shown due to ...
Message to print when allow_tracebacks is False, and an exception occurs
- KernelSpecManager.allowed_kernelspecsSet
Default:
set()
List of allowed kernel names.
By default, all installed kernels are allowed.
- KernelSpecManager.ensure_native_kernelBool
Default:
True
- If there is no Python kernelspec registered and the IPython
kernel is available, ensure it is added to the spec list.
- KernelSpecManager.kernel_spec_classType
Default:
'jupyter_client.kernelspec.KernelSpec'
- The kernel spec class. This is configurable to allow
subclassing of the KernelSpecManager for customized behavior.
- KernelSpecManager.whitelistSet
Default:
set()
Deprecated, use
KernelSpecManager.allowed_kernelspecs
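Since `whitelist` is deprecated, new configurations should use `allowed_kernelspecs`. A minimal sketch (the kernel names are examples; use the names reported by `jupyter kernelspec list`):

```python
from traitlets.config import Config

c = Config()
# Only these kernelspec names may be launched.
c.KernelSpecManager.allowed_kernelspecs = {"python3", "ir"}
```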
- AsyncMultiKernelManager.default_kernel_nameUnicode
Default:
'python3'
The name of the default kernel to start
- AsyncMultiKernelManager.kernel_manager_classDottedObjectName
Default:
'jupyter_client.ioloop.AsyncIOLoopKernelManager'
- The kernel manager class. This is configurable to allow
subclassing of the AsyncKernelManager for customized behavior.
- AsyncMultiKernelManager.shared_contextBool
Default:
True
Share a single zmq.Context to talk to all my kernels
- AsyncMultiKernelManager.use_pending_kernelsBool
Default:
False
- Whether to make kernels available before the process has started. The
kernel has a
.ready
future which can be awaited before connecting
- AsyncMappingKernelManager.allow_tracebacksBool
Default:
True
Whether to send tracebacks to clients on exceptions.
- AsyncMappingKernelManager.allowed_message_typesList
Default:
[]
- List of allowed kernel message types.
When the list is empty, all message types are allowed.
- AsyncMappingKernelManager.buffer_offline_messagesBool
Default:
True
Whether messages from kernels whose frontends have disconnected should be buffered in-memory.
When True (default), messages are buffered and replayed on reconnect, avoiding lost messages due to interrupted connectivity.
Disable if long-running kernels will produce too much output while no frontends are connected.
- AsyncMappingKernelManager.cull_busyBool
Default:
False
- Whether to consider culling kernels which are busy.
Only effective if cull_idle_timeout > 0.
- AsyncMappingKernelManager.cull_connectedBool
Default:
False
- Whether to consider culling kernels which have one or more connections.
Only effective if cull_idle_timeout > 0.
- AsyncMappingKernelManager.cull_idle_timeoutInt
Default:
0
- Timeout (in seconds) after which a kernel is considered idle and ready to be culled.
Values of 0 or lower disable culling. Very short timeouts may result in kernels being culled for users with poor network connections.
- AsyncMappingKernelManager.cull_intervalInt
Default:
300
The interval (in seconds) on which to check for idle kernels exceeding the cull timeout value.
- AsyncMappingKernelManager.default_kernel_nameUnicode
Default:
'python3'
The name of the default kernel to start
- AsyncMappingKernelManager.kernel_info_timeoutFloat
Default:
60
Timeout for giving up on a kernel (in seconds).
On starting and restarting kernels, we check whether the kernel is running and responsive by sending kernel_info_requests. This sets the timeout in seconds for how long the kernel can take before being presumed dead. This affects the MappingKernelManager (which handles kernel restarts) and the ZMQChannelsHandler (which handles the startup).
- AsyncMappingKernelManager.kernel_manager_classDottedObjectName
Default:
'jupyter_client.ioloop.AsyncIOLoopKernelManager'
- The kernel manager class. This is configurable to allow
subclassing of the AsyncKernelManager for customized behavior.
- AsyncMappingKernelManager.root_dirUnicode
Default:
''
No description
- AsyncMappingKernelManager.shared_contextBool
Default:
True
Share a single zmq.Context to talk to all my kernels
- AsyncMappingKernelManager.traceback_replacement_messageUnicode
Default:
'An exception occurred at runtime, which is not shown due to ...
Message to print when allow_tracebacks is False, and an exception occurs
- AsyncMappingKernelManager.use_pending_kernelsBool
Default:
False
- Whether to make kernels available before the process has started. The
kernel has a
.ready
future which can be awaited before connecting
- ContentsManager.allow_hiddenBool
Default:
False
Allow access to hidden files
- ContentsManager.checkpointsInstance
Default:
None
No description
- ContentsManager.checkpoints_classType
Default:
'jupyter_server.services.contents.checkpoints.Checkpoints'
No description
- ContentsManager.checkpoints_kwargsDict
Default:
{}
No description
- ContentsManager.event_loggerInstance
Default:
None
No description
- ContentsManager.files_handler_classType
Default:
'jupyter_server.files.handlers.FilesHandler'
handler class to use when serving raw file requests.
Default is a fallback that talks to the ContentsManager API, which may be inefficient, especially for large files.
Local files-based ContentsManagers can use a StaticFileHandler subclass, which will be much more efficient.
Access to these files should be Authenticated.
- ContentsManager.files_handler_paramsDict
Default:
{}
Extra parameters to pass to files_handler_class.
For example, StaticFileHandlers generally expect a
path
argument specifying the root directory from which to serve files.
- ContentsManager.hide_globsList
Default:
['__pycache__', '*.pyc', '*.pyo', '.DS_Store', '*.so', '*.dyl...
Glob patterns to hide in file and directory listings.
- ContentsManager.post_save_hookAny
Default:
None
Python callable or importstring thereof
to be called on the path of a file just saved.
This can be used to process the file on disk, such as converting the notebook to a script or HTML via nbconvert.
It will be called as (all arguments passed by keyword):
hook(os_path=os_path, model=model, contents_manager=instance)
path: the filesystem path to the file just written
model: the model representing the file
contents_manager: this ContentsManager instance
- ContentsManager.pre_save_hookAny
Default:
None
Python callable or importstring thereof
To be called on a contents model prior to save.
This can be used to process the structure, such as removing notebook outputs or other side effects that should not be saved.
It will be called as (all arguments passed by keyword):
hook(path=path, model=model, contents_manager=self)
model: the model to be saved. Includes file contents. Modifying this dict will affect the file that is stored.
path: the API path of the save destination
contents_manager: this ContentsManager instance
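A sketch of a pre-save hook following the keyword-argument contract described above; the hook name is hypothetical, and clearing outputs is just one common use:

```python
def strip_outputs(path, model, contents_manager, **kwargs):
    """Hypothetical pre-save hook: clear code-cell outputs so they are
    never written to disk. Called with keyword arguments as documented."""
    if model.get("type") != "notebook":
        return
    for cell in model.get("content", {}).get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None

# In jupyter_server_config.py:
# c.ContentsManager.pre_save_hook = strip_outputs
```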
- ContentsManager.preferred_dirUnicode
Default:
''
Preferred starting directory to use for notebooks. This is an API path (
/
separated, relative to root dir)
- ContentsManager.root_dirUnicode
Default:
'/'
No description
- ContentsManager.untitled_directoryUnicode
Default:
'Untitled Folder'
The base name used when creating untitled directories.
- ContentsManager.untitled_fileUnicode
Default:
'untitled'
The base name used when creating untitled files.
- ContentsManager.untitled_notebookUnicode
Default:
'Untitled'
The base name used when creating untitled notebooks.
- FileManagerMixin.hash_algorithm any of 'sha3_256' | 'shake_128' | 'sha512' | 'sha384' | 'sha512_224' | 'shake_256' | 'sha3_512' | 'sha3_224' | 'sha256' | 'md5-sha1' | 'blake2b' | 'sha224' | 'sha3_384' | 'sm3' | 'sha1' | 'blake2s' | 'sha512_256' | 'md5'
Default:
'sha256'
Hash algorithm to use for file content, as supported by hashlib.
- FileManagerMixin.use_atomic_writingBool
Default:
True
- By default, notebooks are saved to a temporary file which, if written successfully, then replaces the old one.
This procedure, called 'atomic writing', causes problems on file systems without operation-order enforcement (such as some networked file systems). If set to False, the new notebook is written directly over the old one, which could fail (e.g. full filesystem or quota exceeded).
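The atomic-writing pattern the option refers to can be sketched as follows; this is an illustration of the technique, not Jupyter's actual implementation:

```python
import os
import tempfile

def atomic_write(path, text):
    """Write to a temporary file in the same directory, then atomically
    replace the target, so a crash mid-write never corrupts the file."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(text)
        os.replace(tmp, path)  # atomic on POSIX and on modern Windows
    except BaseException:
        os.unlink(tmp)  # clean up the temp file; the original is untouched
        raise
```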
- FileContentsManager.allow_hiddenBool
Default:
False
Allow access to hidden files
- FileContentsManager.always_delete_dirBool
Default:
False
- If True, deleting a non-empty directory will always be allowed.
WARNING: this may result in files being permanently removed; e.g. on Windows, if the data size is too big for the trash/recycle bin, the directory will be permanently deleted. If False (default), the non-empty directory will be sent to the trash only if safe, and if
delete_to_trash
is True, the directory won't be deleted.
- FileContentsManager.checkpointsInstance
Default:
None
No description
- FileContentsManager.checkpoints_classType
Default:
'jupyter_server.services.contents.checkpoints.Checkpoints'
No description
- FileContentsManager.checkpoints_kwargsDict
Default:
{}
No description
- FileContentsManager.delete_to_trashBool
Default:
True
- If True (default), deleting files will send them to the
platform’s trash/recycle bin, where they can be recovered. If False, deleting files really deletes them.
- FileContentsManager.event_loggerInstance
Default:
None
No description
- FileContentsManager.files_handler_classType
Default:
'jupyter_server.files.handlers.FilesHandler'
handler class to use when serving raw file requests.
Default is a fallback that talks to the ContentsManager API, which may be inefficient, especially for large files.
Local files-based ContentsManagers can use a StaticFileHandler subclass, which will be much more efficient.
Access to these files should be Authenticated.
- FileContentsManager.files_handler_paramsDict
Default:
{}
Extra parameters to pass to files_handler_class.
For example, StaticFileHandlers generally expect a
path
argument specifying the root directory from which to serve files.
- FileContentsManager.hash_algorithm any of 'sha3_256' | 'shake_128' | 'sha512' | 'sha384' | 'sha512_224' | 'shake_256' | 'sha3_512' | 'sha3_224' | 'sha256' | 'md5-sha1' | 'blake2b' | 'sha224' | 'sha3_384' | 'sm3' | 'sha1' | 'blake2s' | 'sha512_256' | 'md5'
Default:
'sha256'
Hash algorithm to use for file content, as supported by hashlib.
- FileContentsManager.hide_globsList
Default:
['__pycache__', '*.pyc', '*.pyo', '.DS_Store', '*.so', '*.dyl...
Glob patterns to hide in file and directory listings.
- FileContentsManager.max_copy_folder_size_mbInt
Default:
500
The max folder size that can be copied
- FileContentsManager.post_save_hookAny
Default:
None
Python callable or importstring thereof
to be called on the path of a file just saved.
This can be used to process the file on disk, such as converting the notebook to a script or HTML via nbconvert.
It will be called as (all arguments passed by keyword):
hook(os_path=os_path, model=model, contents_manager=instance)
path: the filesystem path to the file just written
model: the model representing the file
contents_manager: this ContentsManager instance
- FileContentsManager.pre_save_hookAny
Default:
None
Python callable or importstring thereof
To be called on a contents model prior to save.
This can be used to process the structure, such as removing notebook outputs or other side effects that should not be saved.
It will be called as (all arguments passed by keyword):
hook(path=path, model=model, contents_manager=self)
model: the model to be saved. Includes file contents. Modifying this dict will affect the file that is stored.
path: the API path of the save destination
contents_manager: this ContentsManager instance
- FileContentsManager.preferred_dirUnicode
Default:
''
Preferred starting directory to use for notebooks. This is an API path (
/
separated, relative to root dir)
- FileContentsManager.root_dirUnicode
Default:
''
No description
- FileContentsManager.untitled_directoryUnicode
Default:
'Untitled Folder'
The base name used when creating untitled directories.
- FileContentsManager.untitled_fileUnicode
Default:
'untitled'
The base name used when creating untitled files.
- FileContentsManager.untitled_notebookUnicode
Default:
'Untitled'
The base name used when creating untitled notebooks.
- FileContentsManager.use_atomic_writingBool
Default:
True
- By default, notebooks are saved to a temporary file which, if written successfully, then replaces the old one.
This procedure, called 'atomic writing', causes problems on file systems without operation-order enforcement (such as some networked file systems). If set to False, the new notebook is written directly over the old one, which could fail (e.g. full filesystem or quota exceeded).
- AsyncContentsManager.allow_hiddenBool
Default:
False
Allow access to hidden files
- AsyncContentsManager.checkpointsInstance
Default:
None
No description
- AsyncContentsManager.checkpoints_classType
Default:
'jupyter_server.services.contents.checkpoints.AsyncCheckpoints'
No description
- AsyncContentsManager.checkpoints_kwargsDict
Default:
{}
No description
- AsyncContentsManager.event_loggerInstance
Default:
None
No description
- AsyncContentsManager.files_handler_classType
Default:
'jupyter_server.files.handlers.FilesHandler'
handler class to use when serving raw file requests.
Default is a fallback that talks to the ContentsManager API, which may be inefficient, especially for large files.
Local files-based ContentsManagers can use a StaticFileHandler subclass, which will be much more efficient.
Access to these files should be Authenticated.
- AsyncContentsManager.files_handler_paramsDict
Default:
{}
Extra parameters to pass to files_handler_class.
For example, StaticFileHandlers generally expect a
path
argument specifying the root directory from which to serve files.
- AsyncContentsManager.hide_globsList
Default:
['__pycache__', '*.pyc', '*.pyo', '.DS_Store', '*.so', '*.dyl...
Glob patterns to hide in file and directory listings.
- AsyncContentsManager.post_save_hookAny
Default:
None
Python callable or importstring thereof
to be called on the path of a file just saved.
This can be used to process the file on disk, such as converting the notebook to a script or HTML via nbconvert.
It will be called as (all arguments passed by keyword):
hook(os_path=os_path, model=model, contents_manager=instance)
path: the filesystem path to the file just written
model: the model representing the file
contents_manager: this ContentsManager instance
- AsyncContentsManager.pre_save_hookAny
Default:
None
Python callable or importstring thereof
To be called on a contents model prior to save.
This can be used to process the structure, such as removing notebook outputs or other side effects that should not be saved.
It will be called as (all arguments passed by keyword):
hook(path=path, model=model, contents_manager=self)
model: the model to be saved. Includes file contents. Modifying this dict will affect the file that is stored.
path: the API path of the save destination
contents_manager: this ContentsManager instance
- AsyncContentsManager.preferred_dirUnicode
Default:
''
Preferred starting directory to use for notebooks. This is an API path (
/
separated, relative to root dir)
- AsyncContentsManager.root_dirUnicode
Default:
'/'
No description
- AsyncContentsManager.untitled_directoryUnicode
Default:
'Untitled Folder'
The base name used when creating untitled directories.
- AsyncContentsManager.untitled_fileUnicode
Default:
'untitled'
The base name used when creating untitled files.
- AsyncContentsManager.untitled_notebookUnicode
Default:
'Untitled'
The base name used when creating untitled notebooks.
- AsyncFileManagerMixin.hash_algorithm any of 'sha3_256' | 'shake_128' | 'sha512' | 'sha384' | 'sha512_224' | 'shake_256' | 'sha3_512' | 'sha3_224' | 'sha256' | 'md5-sha1' | 'blake2b' | 'sha224' | 'sha3_384' | 'sm3' | 'sha1' | 'blake2s' | 'sha512_256' | 'md5'
Default:
'sha256'
Hash algorithm to use for file content, as supported by hashlib.
- AsyncFileManagerMixin.use_atomic_writingBool
Default:
True
- By default, notebooks are saved to a temporary file which, if written successfully, then replaces the old one.
This procedure, called 'atomic writing', causes problems on file systems without operation-order enforcement (such as some networked file systems). If set to False, the new notebook is written directly over the old one, which could fail (e.g. full filesystem or quota exceeded).
- AsyncFileContentsManager.allow_hiddenBool
Default:
False
Allow access to hidden files
- AsyncFileContentsManager.always_delete_dirBool
Default:
False
- If True, deleting a non-empty directory will always be allowed.
WARNING: this may result in files being permanently removed; e.g. on Windows, if the data size is too big for the trash/recycle bin, the directory will be permanently deleted. If False (default), the non-empty directory will be sent to the trash only if safe, and if
delete_to_trash
is True, the directory won't be deleted.
- AsyncFileContentsManager.checkpointsInstance
Default:
None
No description
- AsyncFileContentsManager.checkpoints_classType
Default:
'jupyter_server.services.contents.checkpoints.AsyncCheckpoints'
No description
- AsyncFileContentsManager.checkpoints_kwargsDict
Default:
{}
No description
- AsyncFileContentsManager.delete_to_trashBool
Default:
True
- If True (default), deleting files will send them to the
platform’s trash/recycle bin, where they can be recovered. If False, deleting files really deletes them.
- AsyncFileContentsManager.event_loggerInstance
Default:
None
No description
- AsyncFileContentsManager.files_handler_classType
Default:
'jupyter_server.files.handlers.FilesHandler'
handler class to use when serving raw file requests.
Default is a fallback that talks to the ContentsManager API, which may be inefficient, especially for large files.
Local files-based ContentsManagers can use a StaticFileHandler subclass, which will be much more efficient.
Access to these files should be Authenticated.
- AsyncFileContentsManager.files_handler_paramsDict
Default:
{}
Extra parameters to pass to files_handler_class.
For example, StaticFileHandlers generally expect a
path
argument specifying the root directory from which to serve files.
- AsyncFileContentsManager.hash_algorithm any of 'sha3_256' | 'shake_128' | 'sha512' | 'sha384' | 'sha512_224' | 'shake_256' | 'sha3_512' | 'sha3_224' | 'sha256' | 'md5-sha1' | 'blake2b' | 'sha224' | 'sha3_384' | 'sm3' | 'sha1' | 'blake2s' | 'sha512_256' | 'md5'
Default:
'sha256'
Hash algorithm to use for file content, as supported by hashlib.
- AsyncFileContentsManager.hide_globsList
Default:
['__pycache__', '*.pyc', '*.pyo', '.DS_Store', '*.so', '*.dyl...
Glob patterns to hide in file and directory listings.
- AsyncFileContentsManager.max_copy_folder_size_mbInt
Default:
500
The max folder size that can be copied
- AsyncFileContentsManager.post_save_hookAny
Default:
None
Python callable or importstring thereof
to be called on the path of a file just saved.
This can be used to process the file on disk, such as converting the notebook to a script or HTML via nbconvert.
It will be called as (all arguments passed by keyword):
hook(os_path=os_path, model=model, contents_manager=instance)
path: the filesystem path to the file just written
model: the model representing the file
contents_manager: this ContentsManager instance
- AsyncFileContentsManager.pre_save_hookAny
Default:
None
Python callable or importstring thereof
To be called on a contents model prior to save.
This can be used to process the structure, such as removing notebook outputs or other side effects that should not be saved.
It will be called as (all arguments passed by keyword):
hook(path=path, model=model, contents_manager=self)
model: the model to be saved. Includes file contents. Modifying this dict will affect the file that is stored.
path: the API path of the save destination
contents_manager: this ContentsManager instance
- AsyncFileContentsManager.preferred_dirUnicode
Default:
''
Preferred starting directory to use for notebooks. This is an API path (
/
separated, relative to root dir)
- AsyncFileContentsManager.root_dirUnicode
Default:
''
No description
- AsyncFileContentsManager.untitled_directoryUnicode
Default:
'Untitled Folder'
The base name used when creating untitled directories.
- AsyncFileContentsManager.untitled_fileUnicode
Default:
'untitled'
The base name used when creating untitled files.
- AsyncFileContentsManager.untitled_notebookUnicode
Default:
'Untitled'
The base name used when creating untitled notebooks.
- AsyncFileContentsManager.use_atomic_writingBool
Default:
True
- By default, notebooks are saved to a temporary file which, if written successfully, then replaces the old one.
This procedure, called 'atomic writing', causes problems on file systems without operation-order enforcement (such as some networked file systems). If set to False, the new notebook is written directly over the old one, which could fail (e.g. full filesystem or quota exceeded).
- NotebookNotary.algorithm any of 'sha224' | 'sha3_256' | 'sha3_512' | 'sha256' | 'sha512' | 'sha3_224' | 'sha3_384' | 'sha1' | 'blake2s' | 'sha384' | 'md5' | 'blake2b'
Default:
'sha256'
The hashing algorithm used to sign notebooks.
- NotebookNotary.data_dirUnicode
Default:
''
The storage directory for notary secret and database.
- NotebookNotary.db_fileUnicode
Default:
''
- The sqlite file in which to store notebook signatures.
By default, this will be in your Jupyter data directory. You can set it to ‘:memory:’ to disable sqlite writing to the filesystem.
- NotebookNotary.secretBytes
Default:
b''
The secret key with which notebooks are signed.
- NotebookNotary.secret_fileUnicode
Default:
''
The file where the secret key is stored.
- NotebookNotary.store_factoryCallable
Default:
traitlets.Undefined
- A callable returning the storage backend for notebook signatures.
The default uses an SQLite database.
- GatewayMappingKernelManager.allow_tracebacksBool
Default:
True
Whether to send tracebacks to clients on exceptions.
- GatewayMappingKernelManager.allowed_message_typesList
Default:
[]
- List of allowed kernel message types.
When the list is empty, all message types are allowed.
- GatewayMappingKernelManager.buffer_offline_messagesBool
Default:
True
Whether messages from kernels whose frontends have disconnected should be buffered in-memory.
When True (default), messages are buffered and replayed on reconnect, avoiding lost messages due to interrupted connectivity.
Disable if long-running kernels will produce too much output while no frontends are connected.
- GatewayMappingKernelManager.cull_busyBool
Default:
False
- Whether to consider culling kernels which are busy.
Only effective if cull_idle_timeout > 0.
- GatewayMappingKernelManager.cull_connectedBool
Default:
False
- Whether to consider culling kernels which have one or more connections.
Only effective if cull_idle_timeout > 0.
- GatewayMappingKernelManager.cull_idle_timeoutInt
Default:
0
- Timeout (in seconds) after which a kernel is considered idle and ready to be culled.
Values of 0 or lower disable culling. Very short timeouts may result in kernels being culled for users with poor network connections.
- GatewayMappingKernelManager.cull_intervalInt
Default:
300
The interval (in seconds) on which to check for idle kernels exceeding the cull timeout value.
- GatewayMappingKernelManager.default_kernel_nameUnicode
Default:
'python3'
The name of the default kernel to start
- GatewayMappingKernelManager.kernel_info_timeoutFloat
Default:
60
Timeout for giving up on a kernel (in seconds).
On starting and restarting kernels, we check whether the kernel is running and responsive by sending kernel_info_requests. This sets the timeout in seconds for how long the kernel can take before being presumed dead. This affects the MappingKernelManager (which handles kernel restarts) and the ZMQChannelsHandler (which handles the startup).
- GatewayMappingKernelManager.kernel_manager_classDottedObjectName
Default:
'jupyter_client.ioloop.AsyncIOLoopKernelManager'
- The kernel manager class. This is configurable to allow
subclassing of the AsyncKernelManager for customized behavior.
- GatewayMappingKernelManager.root_dirUnicode
Default:
''
No description
- GatewayMappingKernelManager.shared_contextBool
Default:
True
Share a single zmq.Context to talk to all my kernels
- GatewayMappingKernelManager.traceback_replacement_messageUnicode
Default:
'An exception occurred at runtime, which is not shown due to ...
Message to print when allow_tracebacks is False, and an exception occurs
- GatewayMappingKernelManager.use_pending_kernelsBool
Default:
False
- Whether to make kernels available before the process has started. The
kernel has a
.ready
future which can be awaited before connecting
- GatewayKernelSpecManager.allowed_kernelspecsSet
Default:
set()
List of allowed kernel names.
By default, all installed kernels are allowed.
- GatewayKernelSpecManager.ensure_native_kernelBool
Default:
True
- If there is no Python kernelspec registered and the IPython
kernel is available, ensure it is added to the spec list.
- GatewayKernelSpecManager.kernel_spec_classType
Default:
'jupyter_client.kernelspec.KernelSpec'
- The kernel spec class. This is configurable to allow
subclassing of the KernelSpecManager for customized behavior.
- GatewayKernelSpecManager.whitelistSet
Default:
set()
Deprecated, use
KernelSpecManager.allowed_kernelspecs
- SessionManager.database_filepathUnicode
Default:
':memory:'
The filesystem path to SQLite Database file (e.g. /path/to/session_database.db). By default, the session database is stored in-memory (i.e.
:memory:
setting from sqlite3) and does not persist when the current Jupyter Server shuts down.
- GatewaySessionManager.database_filepathUnicode
Default:
':memory:'
The filesystem path to SQLite Database file (e.g. /path/to/session_database.db). By default, the session database is stored in-memory (i.e.
:memory:
setting from sqlite3) and does not persist when the current Jupyter Server shuts down.
- BaseKernelWebsocketConnection.kernel_ws_protocolUnicode
Default:
None
Preferred kernel message protocol over websocket to use (default: None). If an empty string is passed, select the legacy protocol. If None, the selected protocol will depend on what the front-end supports (usually the most recent protocol supported by the back-end and the front-end).
- BaseKernelWebsocketConnection.sessionInstance
Default:
None
No description
- GatewayWebSocketConnection.kernel_ws_protocolUnicode
Default:
''
No description
- GatewayWebSocketConnection.sessionInstance
Default:
None
No description
- GatewayClient.accept_cookiesBool
Default:
False
- Accept and manage cookies sent by the service side. This is often useful
for load balancers to decide which backend node to use. (JUPYTER_GATEWAY_ACCEPT_COOKIES env var)
- GatewayClient.allowed_envsUnicode
Default:
''
A comma-separated list of environment variable names that will be included, along with their values, in the kernel startup request. The corresponding
client_envs
configuration value must also be set on the Gateway server - since that configuration value indicates which environmental values to make available to the kernel. (JUPYTER_GATEWAY_ALLOWED_ENVS env var)
- GatewayClient.auth_header_keyUnicode
Default:
''
The authorization header’s key name (typically ‘Authorization’) used in the HTTP headers. The header will be formatted as:
{'{auth_header_key}': '{auth_scheme} {auth_token}'}
If the authorization header key takes a single value,
auth_scheme
should be set to None and ‘auth_token’ should be configured to use the appropriate value.(JUPYTER_GATEWAY_AUTH_HEADER_KEY env var)
- GatewayClient.auth_schemeUnicode
Default:
''
The auth scheme, added as a prefix to the authorization token used in the HTTP headers. (JUPYTER_GATEWAY_AUTH_SCHEME env var)
- GatewayClient.auth_tokenUnicode
Default:
None
The authorization token used in the HTTP headers. The header will be formatted as:
{'{auth_header_key}': '{auth_scheme} {auth_token}'}
(JUPYTER_GATEWAY_AUTH_TOKEN env var)
- GatewayClient.ca_certsUnicode
Default:
None
The filename of CA certificates or None to use defaults. (JUPYTER_GATEWAY_CA_CERTS env var)
- GatewayClient.client_certUnicode
Default:
None
The filename for client SSL certificate, if any. (JUPYTER_GATEWAY_CLIENT_CERT env var)
- GatewayClient.client_keyUnicode
Default:
None
The filename for client SSL key, if any. (JUPYTER_GATEWAY_CLIENT_KEY env var)
- GatewayClient.connect_timeoutFloat
Default:
40.0
The time allowed for HTTP connection establishment with the Gateway server. (JUPYTER_GATEWAY_CONNECT_TIMEOUT env var)
- GatewayClient.env_whitelistUnicode
Default:
''
Deprecated, use
GatewayClient.allowed_envs
- GatewayClient.event_loggerInstance
Default:
None
No description
- GatewayClient.gateway_retry_intervalFloat
Default:
1.0
The initial time allowed for HTTP reconnection with the Gateway server. Each subsequent retry interval is doubled, up to a maximum of JUPYTER_GATEWAY_RETRY_INTERVAL_MAX. (JUPYTER_GATEWAY_RETRY_INTERVAL env var)
- GatewayClient.gateway_retry_interval_maxFloat
Default:
30.0
The maximum time allowed for HTTP reconnection retry with the Gateway server. (JUPYTER_GATEWAY_RETRY_INTERVAL_MAX env var)
- GatewayClient.gateway_retry_maxInt
Default:
5
The maximum retries allowed for HTTP reconnection with the Gateway server. (JUPYTER_GATEWAY_RETRY_MAX env var)
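The retry options above describe an exponential backoff; a small illustration of that schedule (the helper name and formula are an interpretation of the description, not Jupyter code):

```python
def retry_delay(retry_number, base=1.0, cap=30.0):
    """Delay before a given retry: the interval doubles with each
    retry and is capped at the configured maximum."""
    return min(base * (2 ** retry_number), cap)

# With the defaults (base 1.0s, cap 30.0s), retries 0..5 wait:
# 1.0, 2.0, 4.0, 8.0, 16.0, 30.0 seconds
```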
- GatewayClient.gateway_token_renewer_classType
Default:
'jupyter_server.gateway.gateway_client.GatewayTokenRenewerBase'
The class to use for Gateway token renewal. (JUPYTER_GATEWAY_TOKEN_RENEWER_CLASS env var)
- GatewayClient.headersUnicode
Default:
'{}'
- Additional HTTP headers to pass on the request. This value will be converted to a dict.
(JUPYTER_GATEWAY_HEADERS env var)
- GatewayClient.http_pwdUnicode
Default:
None
The password for HTTP authentication. (JUPYTER_GATEWAY_HTTP_PWD env var)
- GatewayClient.http_userUnicode
Default:
None
The username for HTTP authentication. (JUPYTER_GATEWAY_HTTP_USER env var)
- GatewayClient.kernels_endpointUnicode
Default:
'/api/kernels'
The gateway API endpoint for accessing kernel resources (JUPYTER_GATEWAY_KERNELS_ENDPOINT env var)
- GatewayClient.kernelspecs_endpointUnicode
Default:
'/api/kernelspecs'
The gateway API endpoint for accessing kernelspecs (JUPYTER_GATEWAY_KERNELSPECS_ENDPOINT env var)
- GatewayClient.kernelspecs_resource_endpointUnicode
Default:
'/kernelspecs'
The gateway endpoint for accessing kernelspecs resources (JUPYTER_GATEWAY_KERNELSPECS_RESOURCE_ENDPOINT env var)
- GatewayClient.launch_timeout_padFloat
Default:
2.0
Timeout pad to be ensured between KERNEL_LAUNCH_TIMEOUT and request_timeout such that request_timeout >= KERNEL_LAUNCH_TIMEOUT + launch_timeout_pad. (JUPYTER_GATEWAY_LAUNCH_TIMEOUT_PAD env var)
- GatewayClient.request_timeoutFloat
Default:
42.0
The time allowed for HTTP request completion. (JUPYTER_GATEWAY_REQUEST_TIMEOUT env var)
- GatewayClient.url : Unicode
Default:
None
The url of the Kernel or Enterprise Gateway server where kernel specifications are defined and kernel management takes place. If defined, this server acts as a proxy for all kernel management and kernel specification retrieval. (JUPYTER_GATEWAY_URL env var)
- GatewayClient.validate_cert : Bool
Default:
True
For HTTPS requests, determines whether the server's certificate should be validated. (JUPYTER_GATEWAY_VALIDATE_CERT env var)
- GatewayClient.ws_url : Unicode
Default:
None
The websocket url of the Kernel or Enterprise Gateway server. If not provided, this value will correspond to the value of the Gateway url with ‘ws’ in place of ‘http’. (JUPYTER_GATEWAY_WS_URL env var)
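Taken together, the GatewayClient traits above are typically set in a `jupyter_server_config.py` file or via the corresponding environment variables. A minimal configuration sketch (the gateway address and timeout value are placeholders, not recommendations):

```python
# jupyter_server_config.py -- proxy kernel management to a Gateway server.
# get_config() is injected by the traitlets config loader at startup.
c = get_config()  # noqa

c.GatewayClient.url = "http://gateway-host:8888"  # placeholder address
c.GatewayClient.request_timeout = 60.0            # seconds; placeholder value
c.GatewayClient.validate_cert = True              # verify HTTPS certificates
# Equivalent environment variables: JUPYTER_GATEWAY_URL,
# JUPYTER_GATEWAY_REQUEST_TIMEOUT, JUPYTER_GATEWAY_VALIDATE_CERT.
```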
- EventLogger.handlers : Handlers
Default:
None
A list of logging.Handler instances to send events to.
When set to None (the default), all events are discarded.
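The `handlers` trait accepts standard `logging.Handler` instances. A minimal sketch routing server events to a file (the filename is illustrative):

```python
# jupyter_server_config.py -- send server events to a log file.
import logging

c = get_config()  # noqa  # injected by the traitlets config loader

# Without at least one handler, all events are discarded.
c.EventLogger.handlers = [logging.FileHandler("jupyter-events.log")]
```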
- ZMQChannelsWebsocketConnection.iopub_data_rate_limit : Float
Default:
1000000
(bytes/sec) Maximum rate at which stream output can be sent on iopub before it is limited.
- ZMQChannelsWebsocketConnection.iopub_msg_rate_limit : Float
Default:
1000
(msgs/sec) Maximum rate at which messages can be sent on iopub before they are limited.
- ZMQChannelsWebsocketConnection.kernel_ws_protocol : Unicode
Default:
None
Preferred kernel message protocol over websocket to use (default: None). If an empty string is passed, select the legacy protocol. If None, the selected protocol will depend on what the front-end supports (usually the most recent protocol supported by the back-end and the front-end).
- ZMQChannelsWebsocketConnection.limit_rate : Bool
Default:
True
Whether to limit the rate of IOPub messages (default: True). If True, use iopub_msg_rate_limit, iopub_data_rate_limit and/or rate_limit_window to tune the rate.
- ZMQChannelsWebsocketConnection.rate_limit_window : Float
Default:
3
(sec) Time window used to check the message and data rate limits.
- ZMQChannelsWebsocketConnection.session : Instance
Default:
None
No description
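The rate-limit traits above work together: when `limit_rate` is enabled, both the message-count and byte-count limits are checked over `rate_limit_window`. A configuration sketch relaxing the limits for notebooks with heavy output (the values are illustrative, not recommendations):

```python
# jupyter_server_config.py -- tune IOPub rate limiting.
c = get_config()  # noqa  # injected by the traitlets config loader

c.ZMQChannelsWebsocketConnection.limit_rate = True
c.ZMQChannelsWebsocketConnection.iopub_msg_rate_limit = 3000    # msgs/sec
c.ZMQChannelsWebsocketConnection.iopub_data_rate_limit = 5e6    # bytes/sec
c.ZMQChannelsWebsocketConnection.rate_limit_window = 3          # sec
```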
Changelog#
All notable changes to this project will be documented in this file.
2.14.0#
Enhancements made#
Bugs fixed#
Maintenance and upkeep improvements#
Fix jupytext and lint CI failures #1413 (@blink1073)
Set all min deps #1411 (@blink1073)
chore: update pre-commit hooks #1409 (@pre-commit-ci)
Update pytest requirement from <8,>=7.0 to >=7.0,<9 #1402 (@dependabot)
Pin to Pytest 7 #1401 (@blink1073)
Documentation improvements#
Link to GitHub repo from the docs #1415 (@krassowski)
docs: list server extensions #1412 (@oliver-sanders)
Update simple extension README to cd into correct subdirectory #1410 (@markypizz)
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @dependabot | @holzman | @krassowski | @markypizz | @minrk | @oliver-sanders | @pre-commit-ci | @welcome | @Zsailer
2.13.0#
Enhancements made#
Add an option to have authentication enabled for all endpoints by default #1392 (@krassowski)
websockets: add configurations for ping interval and timeout #1391 (@oliver-sanders)
Bugs fixed#
Maintenance and upkeep improvements#
Update release workflows #1399 (@blink1073)
chore: update pre-commit hooks #1390 (@pre-commit-ci)
Documentation improvements#
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @hansepac | @krassowski | @manics | @minrk | @oliver-sanders | @pre-commit-ci | @Timeroot | @welcome | @yuvipanda
2.12.5#
Maintenance and upkeep improvements#
Improve warning handling #1386 (@blink1073)
Contributors to this release#
2.12.4#
Bugs fixed#
Contributors to this release#
2.12.3#
Bugs fixed#
Import User unconditionally #1384 (@yuvipanda)
Maintenance and upkeep improvements#
Contributors to this release#
(GitHub contributors page for this release)
@mwouts | @tornaria | @welcome | @yuvipanda
2.12.2#
Bugs fixed#
Fix a typo in error message #1381 (@krassowski)
Force legacy ws subprotocol when using gateway #1311 (@epignot)
Maintenance and upkeep improvements#
Update pre-commit deps #1380 (@blink1073)
Use ruff docstring-code-format #1377 (@blink1073)
Documentation improvements#
Contributors to this release#
2.12.1#
Enhancements made#
Contributors to this release#
2.12.0#
Enhancements made#
Maintenance and upkeep improvements#
Update for tornado 6.4 #1372 (@blink1073)
chore: update pre-commit hooks #1370 (@pre-commit-ci)
Contributors to this release#
2.11.2#
Contributors to this release#
2.11.1#
Bugs fixed#
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @fcollonval | @minrk | @Wh1isper
2.11.0#
Enhancements made#
Maintenance and upkeep improvements#
Update ruff and typings #1365 (@blink1073)
Documentation improvements#
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @IITII | @welcome | @Wh1isper
2.10.1#
Bugs fixed#
Maintenance and upkeep improvements#
Clean up ruff config #1358 (@blink1073)
Add more typings #1356 (@blink1073)
chore: update pre-commit hooks #1355 (@pre-commit-ci)
Contributors to this release#
2.10.0#
Enhancements made#
Update kernel env to reflect changes in session #1354 (@blink1073)
Maintenance and upkeep improvements#
Clean up config and address warnings #1353 (@blink1073)
Clean up lint and typing #1351 (@blink1073)
Update typing for traitlets 5.13 #1350 (@blink1073)
Update typings and fix tests #1344 (@blink1073)
Contributors to this release#
2.9.1#
Bugs fixed#
Revert “Update kernel env to reflect changes in session.” #1346 (@blink1073)
Contributors to this release#
2.9.0#
Enhancements made#
Ability to configure cull_idle_timeout with kernelSpec #1342 (@akshaychitneni)
Update kernel env to reflect changes in session. #1341 (@Carreau)
Bugs fixed#
Contributors to this release#
2.8.0#
Enhancements made#
Added Logs for get_os_path closes issue #1336 (@jayeshsingh9767)
Bugs fixed#
Maintenance and upkeep improvements#
Update typings for mypy 1.6 #1337 (@blink1073)
chore: update pre-commit hooks #1334 (@pre-commit-ci)
Add typings to commonly used APIs #1333 (@blink1073)
Update typings for traitlets 5.10 #1330 (@blink1073)
Adopt sp-repo-review #1324 (@blink1073)
Bump actions/checkout from 3 to 4 #1321 (@dependabot)
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @dependabot | @jayeshsingh9767 | @minrk | @pre-commit-ci | @welcome
2.7.3#
New features added#
Support external kernels #1305 (@davidbrochart)
Contributors to this release#
2.7.1#
Bugs fixed#
Maintenance and upkeep improvements#
Documentation improvements#
Rename notebook.auth.security.passwd->jupyter_server.auth.passwd in docs #1306 (@mathbunnyru)
Update notes link #1298 (@krassowski)
docs: fix broken hyperlink to Tornado #1297 (@emmanuel-ferdman)
Contributors to this release#
(GitHub contributors page for this release)
@allstrive | @bhperry | @blink1073 | @emmanuel-ferdman | @Hind-M | @kevin-bates | @krassowski | @mathbunnyru | @matthewwiese | @minrk | @pre-commit-ci | @welcome | @wqj97 | @Zsailer
2.7.0#
Bugs fixed#
Add missing events to gateway client #1288 (@allstrive)
Maintenance and upkeep improvements#
Handle test failures #1289 (@blink1073)
Try testing against python 3.12 #1282 (@blink1073)
Documentation improvements#
Remove frontend doc #1292 (@fcollonval)
Contributors to this release#
(GitHub contributors page for this release)
@allstrive | @blink1073 | @fcollonval | @kevin-bates | @minrk | @pre-commit-ci | @welcome
2.6.0#
New features added#
Emit events from the kernels service and gateway client #1252 (@rajmusuku)
Enhancements made#
Bugs fixed#
Don’t instantiate an unused Future in gateway connection trait #1276 (@minrk)
Make the kernel_websocket_protocol flag reusable. #1264 (@ojarjur)
Register websocket handler from same module as kernel handlers #1249 (@kevin-bates)
Re-enable websocket ping/pong from the server #1243 (@Zsailer)
Fix italics in operators security sections #1242 (@kevin-bates)
Maintenance and upkeep improvements#
Fix DeprecationWarning from pytest-console-scripts #1281 (@frenzymadness)
Remove docutils and mistune pins #1278 (@blink1073)
Update docutils requirement from <0.20 to <0.21 #1277 (@dependabot)
Fix coverage handling #1257 (@blink1073)
chore: delete .gitmodules #1248 (@SauravMaheshkar)
chore: move babel and eslint configuration under package.json #1246 (@SauravMaheshkar)
Documentation improvements#
Fix typo in docs #1270 (@davidbrochart)
Fix typo #1262 (@davidbrochart)
Fix italics in operators security sections #1242 (@kevin-bates)
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @brichet | @codecov | @davidbrochart | @dependabot | @echarles | @frenzymadness | @hbcarlos | @kevin-bates | @lresende | @minrk | @ojarjur | @pre-commit-ci | @rajmusuku | @SauravMaheshkar | @welcome | @yuvipanda | @Zsailer
2.5.0#
Enhancements made#
Enable KernelSpecResourceHandler to be async #1236 (@Zsailer)
Added error propagation to gateway_request function #1233 (@broden-wanner)
Maintenance and upkeep improvements#
Update ruff #1230 (@blink1073)
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @broden-wanner | @codecov | @welcome | @Zsailer
2.4.0#
Enhancements made#
Bugs fixed#
Fix port selection #1229 (@blink1073)
Fix priority of deprecated NotebookApp.notebook_dir behind ServerApp.root_dir #1223 (@minrk)
Ensure content-type properly reflects gateway kernelspec resources #1219 (@kevin-bates)
Maintenance and upkeep improvements#
fix docs build #1225 (@blink1073)
Fix ci failures #1222 (@blink1073)
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @Carreau | @codecov | @codecov-commenter | @davidbrochart | @dcsaba89 | @echarles | @kenyaachon | @kevin-bates | @minrk | @vidartf | @welcome | @Zsailer
2.3.0#
Enhancements made#
Support IPV6 in _find_http_port() #1207 (@schnell18)
Bugs fixed#
Maintenance and upkeep improvements#
Documentation improvements#
Update jupyterhub security link #1200 (@blink1073)
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @cmd-ntrf | @codecov | @dcsaba89 | @meeseeksdev | @minrk | @pre-commit-ci | @schnell18 | @welcome
2.2.1#
Maintenance and upkeep improvements#
Delete the extra “or” in front of the second url #1194 (@jonnygrout)
Adopt more lint rules #1189 (@blink1073)
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @codecov | @jonnygrout | @minrk | @welcome
2.2.0#
Enhancements made#
Pass in a logger to get_metadata #1176 (@yuvipanda)
Bugs fixed#
Maintenance and upkeep improvements#
Updates for client 8 #1188 (@blink1073)
Update example npm deps #1184 (@blink1073)
Fix docs and examples #1183 (@blink1073)
Update jupyter client api docs links #1179 (@blink1073)
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @Carreau | @codecov | @kevin-bates | @minrk | @ojarjur | @welcome | @yuvipanda
2.1.0#
Bugs fixed#
Maintenance and upkeep improvements#
Update typing and warning handling #1174 (@blink1073)
Documentation improvements#
Add api docs #1159 (@blink1073)
Contributors to this release#
2.0.7#
Enhancements made#
Log how long each extension module takes to import #1171 (@yuvipanda)
Set JPY_SESSION_NAME to full notebook path. #1100 (@Carreau)
Bugs fixed#
Maintenance and upkeep improvements#
Update example to use hatch #1169 (@blink1073)
Clean up docs build and typing #1168 (@blink1073)
Fix check release by ignoring duplicate file name in wheel #1163 (@blink1073)
Fix broken link in warning message #1158 (@consideRatio)
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @Carreau | @codecov | @consideRatio | @meeseeksdev | @pre-commit-ci | @vidartf | @welcome | @yuvipanda
2.0.6#
Bugs fixed#
Iterate through set of apps in extension_manager.any_activity method #1157 (@mahendrapaipuri)
Maintenance and upkeep improvements#
Handle flake8-errmsg #1155 (@blink1073)
Add spelling and docstring enforcement #1147 (@blink1073)
Documentation improvements#
Add spelling and docstring enforcement #1147 (@blink1073)
Contributors to this release#
2.0.5#
Bugs fixed#
Remove end kwarg after migration from print to info #1151 (@krassowski)
Maintenance and upkeep improvements#
Contributors to this release#
2.0.4#
Bugs fixed#
Fix handling of extension last activity #1145 (@blink1073)
Contributors to this release#
2.0.3#
Bugs fixed#
Contributors to this release#
2.0.2#
Bugs fixed#
Raise errors on individual problematic extensions when listing extension #1139 (@Zsailer)
Find an available port before starting event loop #1136 (@blink1073)
only write browser files if we’re launching the browser #1133 (@hhuuggoo)
Logging message used to list sessions fails with template error #1132 (@vindex10)
Include base_url at start of kernelspec resources path #1124 (@bloomsa)
Maintenance and upkeep improvements#
Fix lint rule #1128 (@blink1073)
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @bloomsa | @codecov | @hhuuggoo | @kevin-bates | @vidartf | @vindex10 | @welcome | @Zsailer
2.0.1#
Enhancements made#
[Gateway] Remove redundant list kernels request during session poll #1112 (@kevin-bates)
Maintenance and upkeep improvements#
Update docutils requirement from <0.19 to <0.20 #1120 (@dependabot)
Adopt ruff and use less pre-commit #1114 (@blink1073)
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @codecov | @dependabot | @kevin-bates | @ofek | @ophie200 | @welcome
2.0.0#
Enhancements made#
Introduce ServerKernelManager class #1101 (@kevin-bates)
New configurable/overridable kernel ZMQ+Websocket connection API #1047 (@Zsailer)
Pass kernel environment to cwd_for_path method #1046 (@divyansshhh)
Better Handling of Asyncio #1035 (@blink1073)
Add authorization to AuthenticatedFileHandler #1021 (@jiajunjie)
[Gateway] Add support for gateway token renewal #985 (@kevin-bates)
Make it easier to pass custom env variables to kernel #981 (@divyansshhh)
Accept and manage cookies when requesting gateways #969 (@wjsi)
Retry certain errors between server and gateway #944 (@kevin-bates)
Allow new file types #895 (@davidbrochart)
Make it easier for extensions to customize the ServerApp #879 (@minrk)
Bugs fixed#
Fix kernel WebSocket protocol #1110 (@davidbrochart)
Defer webbrowser import #1095 (@blink1073)
Use handle_outgoing_message for ZMQ replies #1089 (@Zsailer)
Call ports_changed on the multi-kernel-manager instead of the kernel manager #1088 (@Zsailer)
Add more websocket connection tests and fix bugs #1085 (@blink1073)
Tornado WebSocketHandler fixup #1083 (@davidbrochart)
Fix rename_file and delete_file to handle hidden files properly #1073 (@yacchin1205)
Add more coverage #1069 (@blink1073)
Increase nbconvert and checkpoints coverage #1066 (@blink1073)
Fix min version check again #1049 (@blink1073)
Fallback new file type to file for contents put #1013 (@a3626a)
Fix some typos in release instructions #1003 (@kevin-bates)
Wrap the concurrent futures in an asyncio future #1001 (@blink1073)
[Gateway] Fix and deprecate env whitelist handling #979 (@kevin-bates)
Don’t validate certs for when stopping server #959 (@Zsailer)
Parse list value for terminado_settings #949 (@krassowski)
Fix bug in api/contents requests for an allowed copy #939 (@kiersten-stokes)
Fix error that prevents posting to api/contents endpoint with no body #937 (@kiersten-stokes)
Fix get_kernel_path for AsyncFileManagers. #929 (@thetorpedodog)
Fix c.GatewayClient.url snippet syntax #917 (@rickwierenga)
Add back support for kernel launch timeout pad #910 (@CiprianAnton)
Notify ChannelQueue that the response router thread is finishing #896 (@CiprianAnton)
Make ChannelQueue.get_msg true async #892 (@CiprianAnton)
Maintenance and upkeep improvements#
Make tests less sensitive to default kernel name #1118 (@blink1073)
Tweak codecov settings #1113 (@blink1073)
Bump minimatch from 3.0.4 to 3.1.2 #1109 (@dependabot)
Add skip-if-exists config #1108 (@blink1073)
Use pytest-jupyter #1099 (@blink1073)
Clean up release instructions and coverage handling #1098 (@blink1073)
Import ensure_async from jupyter_core #1093 (@davidbrochart)
Add more tests #1092 (@blink1073)
Fix coverage upload #1091 (@blink1073)
Add base handler tests #1090 (@blink1073)
Add more websocket connection tests and fix bugs #1085 (@blink1073)
Use base setup dependency type #1084 (@blink1073)
Add more serverapp tests #1079 (@blink1073)
Add more gateway tests #1078 (@blink1073)
More cleanup #1077 (@blink1073)
Fix hatch scripts and windows workflow run #1074 (@blink1073)
use recommended github-workflows checker #1071 (@blink1073)
Add more coverage #1069 (@blink1073)
More coverage #1067 (@blink1073)
Increase nbconvert and checkpoints coverage #1066 (@blink1073)
Test downstream jupyter_server_terminals #1065 (@blink1073)
Test notebook prerelease #1064 (@blink1073)
Bump actions/checkout from 2 to 3 #1056 (@dependabot)
Bump actions/setup-python from 2 to 4 #1055 (@dependabot)
Bump pre-commit/action from 2.0.0 to 3.0.0 #1054 (@dependabot)
Add dependabot file #1053 (@blink1073)
Use global env for min version check #1048 (@blink1073)
Clean up handling of synchronous managers #1044 (@blink1073)
Clean up config files #1031 (@blink1073)
Make node optional #1030 (@blink1073)
Use admin github token for releaser #1025 (@blink1073)
CI Cleanup #1023 (@blink1073)
Use mdformat instead of prettier #1022 (@blink1073)
Add pyproject validation #1020 (@blink1073)
Remove hardcoded client install in CI #1019 (@blink1073)
Handle client 8 pending kernels #1014 (@blink1073)
Use releaser v2 tag #1010 (@blink1073)
Use hatch environments to simplify test, coverage, and docs build #1007 (@blink1073)
Update to version2 releaser #1006 (@blink1073)
Do not use dev version yet #999 (@blink1073)
Add workflows for simplified publish #993 (@blink1073)
Remove hardcoded client install #991 (@blink1073)
Test with client 8 updates #988 (@blink1073)
Switch to using hatchling version command #984 (@blink1073)
Run downstream tests in parallel #973 (@blink1073)
Update pytest_plugin with fixtures to test auth in core and extensions #956 (@akshaychitneni)
Fix docs build #952 (@blink1073)
Fix flake8 v5 compat #941 (@blink1073)
Improve logging of bare exceptions and other cleanups. #922 (@thetorpedodog)
Use more explicit version template for pyproject #919 (@blink1073)
Fix handling of dev version #913 (@blink1073)
Fix owasp link #908 (@blink1073)
Test python 3.11 on ubuntu #839 (@blink1073)
Documentation improvements#
Remove left over from notebook #1117 (@fcollonval)
Fix wording #1037 (@fcollonval)
Fix GitHub actions badge link #1011 (@blink1073)
Pin docutils to fix docs build #1004 (@blink1073)
Update index.rst #970 (@razrotenberg)
Fix typo in IdentityProvider documentation #915 (@danielyahn)
docs: document the logging_config trait #844 (@oliver-sanders)
Deprecated features#
[Gateway] Fix and deprecate env whitelist handling #979 (@kevin-bates)
Contributors to this release#
(GitHub contributors page for this release)
@3coins | @a3626a | @akshaychitneni | @blink1073 | @bloomsa | @Carreau | @CiprianAnton | @codecov | @codecov-commenter | @danielyahn | @davidbrochart | @dependabot | @divyansshhh | @dlqqq | @echarles | @ellisonbg | @epignot | @fcollonval | @hbcarlos | @jiajunjie | @kevin-bates | @kiersten-stokes | @krassowski | @meeseeksdev | @minrk | @ofek | @oliver-sanders | @pre-commit-ci | @razrotenberg | @rickwierenga | @thetorpedodog | @vidartf | @welcome | @wjsi | @yacchin1205 | @Zsailer
2.0.0rc8#
Enhancements made#
Introduce ServerKernelManager class #1101 (@kevin-bates)
Bugs fixed#
Defer webbrowser import #1095 (@blink1073)
Maintenance and upkeep improvements#
Use pytest-jupyter #1099 (@blink1073)
Clean up release instructions and coverage handling #1098 (@blink1073)
Add more tests #1092 (@blink1073)
Fix coverage upload #1091 (@blink1073)
Add base handler tests #1090 (@blink1073)
Contributors to this release#
2.0.0rc7#
Bugs fixed#
Maintenance and upkeep improvements#
Add more websocket connection tests and fix bugs #1085 (@blink1073)
Use base setup dependency type #1084 (@blink1073)
Contributors to this release#
2.0.0rc6#
Bugs fixed#
Tornado WebSocketHandler fixup #1083 (@davidbrochart)
Maintenance and upkeep improvements#
Contributors to this release#
2.0.0rc5#
Enhancements made#
New configurable/overridable kernel ZMQ+Websocket connection API #1047 (@Zsailer)
Add authorization to AuthenticatedFileHandler #1021 (@jiajunjie)
Bugs fixed#
Fix rename_file and delete_file to handle hidden files properly #1073 (@yacchin1205)
Add more coverage #1069 (@blink1073)
Increase nbconvert and checkpoints coverage #1066 (@blink1073)
Maintenance and upkeep improvements#
Add more serverapp tests #1079 (@blink1073)
Add more gateway tests #1078 (@blink1073)
More cleanup #1077 (@blink1073)
Fix hatch scripts and windows workflow run #1074 (@blink1073)
use recommended github-workflows checker #1071 (@blink1073)
Add more coverage #1069 (@blink1073)
More coverage #1067 (@blink1073)
Increase nbconvert and checkpoints coverage #1066 (@blink1073)
Test downstream jupyter_server_terminals #1065 (@blink1073)
Test notebook prerelease #1064 (@blink1073)
Documentation improvements#
docs: document the logging_config trait #844 (@oliver-sanders)
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @codecov | @codecov-commenter | @jiajunjie | @minrk | @oliver-sanders | @pre-commit-ci | @welcome | @yacchin1205 | @Zsailer
2.0.0rc4#
Enhancements made#
Pass kernel environment to cwd_for_path method #1046 (@divyansshhh)
Better Handling of Asyncio #1035 (@blink1073)
Bugs fixed#
Fix min version check again #1049 (@blink1073)
Maintenance and upkeep improvements#
Bump actions/checkout from 2 to 3 #1056 (@dependabot)
Bump actions/setup-python from 2 to 4 #1055 (@dependabot)
Bump pre-commit/action from 2.0.0 to 3.0.0 #1054 (@dependabot)
Add dependabot file #1053 (@blink1073)
Use global env for min version check #1048 (@blink1073)
Clean up handling of synchronous managers #1044 (@blink1073)
Documentation improvements#
Fix wording #1037 (@fcollonval)
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @Carreau | @codecov-commenter | @dependabot | @divyansshhh | @fcollonval | @pre-commit-ci
2.0.0rc3#
Maintenance and upkeep improvements#
Clean up config files #1031 (@blink1073)
Make node optional #1030 (@blink1073)
Contributors to this release#
2.0.0rc2#
Bugs fixed#
Fallback new file type to file for contents put #1013 (@a3626a)
Fix some typos in release instructions #1003 (@kevin-bates)
Maintenance and upkeep improvements#
Use admin github token for releaser #1025 (@blink1073)
CI Cleanup #1023 (@blink1073)
Use mdformat instead of prettier #1022 (@blink1073)
Add pyproject validation #1020 (@blink1073)
Remove hardcoded client install in CI #1019 (@blink1073)
Handle client 8 pending kernels #1014 (@blink1073)
Use releaser v2 tag #1010 (@blink1073)
Use hatch environments to simplify test, coverage, and docs build #1007 (@blink1073)
Update to version2 releaser #1006 (@blink1073)
Documentation improvements#
Fix GitHub actions badge link #1011 (@blink1073)
Pin docutils to fix docs build #1004 (@blink1073)
Contributors to this release#
(GitHub contributors page for this release)
@a3626a | @blink1073 | @codecov-commenter | @kevin-bates | @pre-commit-ci | @welcome
2.0.0rc1#
Enhancements made#
[Gateway] Add support for gateway token renewal #985 (@kevin-bates)
Make it easier to pass custom env variables to kernel #981 (@divyansshhh)
Bugs fixed#
Wrap the concurrent futures in an asyncio future #1001 (@blink1073)
[Gateway] Fix and deprecate env whitelist handling #979 (@kevin-bates)
Maintenance and upkeep improvements#
Do not use dev version yet #999 (@blink1073)
Add workflows for simplified publish #993 (@blink1073)
Remove hardcoded client install #991 (@blink1073)
Test with client 8 updates #988 (@blink1073)
Switch to using hatchling version command #984 (@blink1073)
Test python 3.11 on ubuntu #839 (@blink1073)
Documentation improvements#
Deprecated features#
[Gateway] Fix and deprecate env whitelist handling #979 (@kevin-bates)
Contributors to this release#
(GitHub contributors page for this release)
@3coins | @blink1073 | @codecov-commenter | @divyansshhh | @kevin-bates | @meeseeksdev | @pre-commit-ci
2.0.0rc0#
New features added#
Enhancements made#
Accept and manage cookies when requesting gateways #969 (@wjsi)
Retry certain errors between server and gateway #944 (@kevin-bates)
Allow new file types #895 (@davidbrochart)
Make it easier for extensions to customize the ServerApp #879 (@minrk)
Show import error when failing to load an extension #878 (@minrk)
Add the root_dir value to the logging message in case of non compliant preferred_dir #804 (@echarles)
Hydrate a Kernel Manager when calling GatewayKernelManager.start_kernel with a kernel_id #788 (@Zsailer)
Remove terminals in favor of jupyter_server_terminals extension #651 (@Zsailer)
Bugs fixed#
Don’t validate certs for when stopping server #959 (@Zsailer)
Parse list value for terminado_settings #949 (@krassowski)
Fix bug in api/contents requests for an allowed copy #939 (@kiersten-stokes)
Fix error that prevents posting to api/contents endpoint with no body #937 (@kiersten-stokes)
Fix get_kernel_path for AsyncFileManagers. #929 (@thetorpedodog)
Notify ChannelQueue that the response router thread is finishing #896 (@CiprianAnton)
Make ChannelQueue.get_msg true async #892 (@CiprianAnton)
Fix gateway kernel shutdown #874 (@kevin-bates)
Defer preferred_dir validation until root_dir is set #826 (@kevin-bates)
Maintenance and upkeep improvements#
Run downstream tests in parallel #973 (@blink1073)
[pre-commit.ci] pre-commit autoupdate #971 (@pre-commit-ci)
[pre-commit.ci] pre-commit autoupdate #963 (@pre-commit-ci)
Update pytest_plugin with fixtures to test auth in core and extensions #956 (@akshaychitneni)
[pre-commit.ci] pre-commit autoupdate #955 (@pre-commit-ci)
Fix docs build #952 (@blink1073)
[pre-commit.ci] pre-commit autoupdate #945 (@pre-commit-ci)
[pre-commit.ci] pre-commit autoupdate #942 (@pre-commit-ci)
Fix flake8 v5 compat #941 (@blink1073)
[pre-commit.ci] pre-commit autoupdate #938 (@pre-commit-ci)
[pre-commit.ci] pre-commit autoupdate #928 (@pre-commit-ci)
[pre-commit.ci] pre-commit autoupdate #902 (@pre-commit-ci)
[pre-commit.ci] pre-commit autoupdate #894 (@pre-commit-ci)
Normalize os_path #886 (@martinRenou)
[pre-commit.ci] pre-commit autoupdate #885 (@pre-commit-ci)
Fix lint #867 (@blink1073)
[pre-commit.ci] pre-commit autoupdate #866 (@pre-commit-ci)
Fix sphinx 5.0 support #865 (@blink1073)
Add license metadata and file #827 (@blink1073)
CI cleanup #824 (@blink1073)
Switch to flit #823 (@blink1073)
Remove duplicate requests requirement from setup.cfg #813 (@mgorny)
[pre-commit.ci] pre-commit autoupdate #802 (@pre-commit-ci)
Add helper jobs for branch protection #797 (@blink1073)
[pre-commit.ci] pre-commit autoupdate #793 (@pre-commit-ci)
Centralize app cleanup #792 (@blink1073)
[pre-commit.ci] pre-commit autoupdate #785 (@pre-commit-ci)
Clean up pre-commit #782 (@blink1073)
Add mypy check #779 (@blink1073)
Use new post-version-spec from jupyter_releaser #777 (@blink1073)
Give write permissions to enforce label workflow #776 (@blink1073)
[pre-commit.ci] pre-commit autoupdate #775 (@pre-commit-ci)
Add explicit handling of warnings #771 (@blink1073)
Use test-sdist from maintainer-tools #769 (@blink1073)
Add pyupgrade and doc8 hooks #768 (@blink1073)
Documentation improvements#
Fix typo in IdentityProvider documentation #915 (@danielyahn)
Add Session workflows documentation #808 (@andreyvelich)
Add Jupyter Server Architecture diagram #801 (@andreyvelich)
Fix path for full config doc #800 (@andreyvelich)
Fix contributing guide for building the docs #794 (@andreyvelich)
Update documentation about registering file save hooks #770 (@davidbrochart)
Other merged PRs#
Update index.rst #970 (@razrotenberg)
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @codecov-commenter | @echarles | @epignot | @krassowski | @pre-commit-ci | @razrotenberg | @welcome | @wjsi | @Zsailer
2.0.0b1#
Enhancements made#
Retry certain errors between server and gateway #944 (@kevin-bates)
Allow new file types #895 (@davidbrochart)
Bugs fixed#
Fix bug in api/contents requests for an allowed copy #939 (@kiersten-stokes)
Fix error that prevents posting to api/contents endpoint with no body #937 (@kiersten-stokes)
Fix get_kernel_path for AsyncFileManagers. #929 (@thetorpedodog)
Maintenance and upkeep improvements#
Update pytest_plugin with fixtures to test auth in core and extensions #956 (@akshaychitneni)
[pre-commit.ci] pre-commit autoupdate #955 (@pre-commit-ci)
Fix docs build #952 (@blink1073)
[pre-commit.ci] pre-commit autoupdate #945 (@pre-commit-ci)
[pre-commit.ci] pre-commit autoupdate #942 (@pre-commit-ci)
Fix flake8 v5 compat #941 (@blink1073)
[pre-commit.ci] pre-commit autoupdate #938 (@pre-commit-ci)
[pre-commit.ci] pre-commit autoupdate #928 (@pre-commit-ci)
Documentation improvements#
Fix typo in IdentityProvider documentation #915 (@danielyahn)
Contributors to this release#
(GitHub contributors page for this release)
@akshaychitneni | @blink1073 | @codecov-commenter | @danielyahn | @davidbrochart | @dlqqq | @hbcarlos | @kevin-bates | @kiersten-stokes | @meeseeksdev | @minrk | @pre-commit-ci | @thetorpedodog | @vidartf | @welcome | @Zsailer
2.0.0b0#
Enhancements made#
Bugs fixed#
Fix c.GatewayClient.url snippet syntax #917 (@rickwierenga)
Add back support for kernel launch timeout pad #910 (@CiprianAnton)
Maintenance and upkeep improvements#
Improve logging of bare exceptions and other cleanups. #922 (@thetorpedodog)
Use more explicit version template for pyproject #919 (@blink1073)
[pre-commit.ci] pre-commit autoupdate #916 (@pre-commit-ci)
Fix handling of dev version #913 (@blink1073)
Fix owasp link #908 (@blink1073)
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @CiprianAnton | @codecov-commenter | @dlqqq | @minrk | @pre-commit-ci | @rickwierenga | @thetorpedodog | @welcome | @Zsailer
2.0.0a2#
Enhancements made#
Bugs fixed#
Notify ChannelQueue that the response router thread is finishing #896 (@CiprianAnton)
Make ChannelQueue.get_msg true async #892 (@CiprianAnton)
Fix gateway kernel shutdown #874 (@kevin-bates)
Maintenance and upkeep improvements#
[pre-commit.ci] pre-commit autoupdate #902 (@pre-commit-ci)
[pre-commit.ci] pre-commit autoupdate #894 (@pre-commit-ci)
Normalize os_path #886 (@martinRenou)
[pre-commit.ci] pre-commit autoupdate #885 (@pre-commit-ci)
Fix lint #867 (@blink1073)
[pre-commit.ci] pre-commit autoupdate #866 (@pre-commit-ci)
Fix sphinx 5.0 support #865 (@blink1073)
Documentation improvements#
Add changelog for 2.0.0a1 #870 (@blink1073)
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @Carreau | @CiprianAnton | @codecov-commenter | @davidbrochart | @echarles | @kevin-bates | @martinRenou | @minrk | @pre-commit-ci
2.0.0a1#
Address security advisory GHSA-q874-g24w-4q9g.
2.0.0a0#
New features added#
Enhancements made#
Bugs fixed#
Defer preferred_dir validation until root_dir is set #826 (@kevin-bates)
Maintenance and upkeep improvements#
Add license metadata and file #827 (@blink1073)
CI cleanup #824 (@blink1073)
Switch to flit #823 (@blink1073)
Remove duplicate requests requirement from setup.cfg #813 (@mgorny)
[pre-commit.ci] pre-commit autoupdate #802 (@pre-commit-ci)
Add helper jobs for branch protection #797 (@blink1073)
[pre-commit.ci] pre-commit autoupdate #793 (@pre-commit-ci)
Centralize app cleanup #792 (@blink1073)
[pre-commit.ci] pre-commit autoupdate #785 (@pre-commit-ci)
Clean up pre-commit #782 (@blink1073)
Add mypy check #779 (@blink1073)
Use new post-version-spec from jupyter_releaser #777 (@blink1073)
Give write permissions to enforce label workflow #776 (@blink1073)
[pre-commit.ci] pre-commit autoupdate #775 (@pre-commit-ci)
Add explicit handling of warnings #771 (@blink1073)
Use test-sdist from maintainer-tools #769 (@blink1073)
Add pyupgrade and doc8 hooks #768 (@blink1073)
Documentation improvements#
Add Session workflows documentation #808 (@andreyvelich)
Add Jupyter Server Architecture diagram #801 (@andreyvelich)
Fix path for full config doc #800 (@andreyvelich)
Fix contributing guide for building the docs #794 (@andreyvelich)
Update documentation about registering file save hooks #770 (@davidbrochart)
Contributors to this release#
(GitHub contributors page for this release)
@andreyvelich | @blink1073 | @bollwyvl | @codecov-commenter | @davidbrochart | @echarles | @hbcarlos | @kevin-bates | @meeseeksdev | @mgorny | @minrk | @pre-commit-ci | @SylvainCorlay | @welcome | @Wh1isper | @willingc | @Zsailer
1.17.0#
Enhancements made#
Bugs fixed#
Maintenance and upkeep improvements#
Add helper jobs for branch protection #797 (@blink1073)
[pre-commit.ci] pre-commit autoupdate #793 (@pre-commit-ci[bot])
Update branch references and links #791 (@blink1073)
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @codecov-commenter | @davidbrochart | @echarles | @kevin-bates | @meeseeksdev | @meeseeksmachine | @Wh1isper | @Zsailer
1.16.0#
New features added#
Enhancements made#
Add max-age Cache-Control header to kernel logos #760 (@divyansshhh)
Bugs fixed#
Regression in connection URL calculation in ServerApp #761 (@jhamet93)
Include explicit package data #757 (@blink1073)
Ensure terminal cwd exists #755 (@fcollonval)
make ‘cwd’ param for TerminalManager absolute #749 (@rccern)
wait to cleanup kernels after kernel is finished pending #748 (@Zsailer)
Maintenance and upkeep improvements#
Skip jsonschema in CI #766 (@blink1073)
Remove redundant job and problematic check #765 (@blink1073)
Update pre-commit #764 (@blink1073)
Install pre-commit automatically #763 (@blink1073)
Add pytest opts and use isort #762 (@blink1073)
Ensure minimal nbconvert support jinja2 v2 & v3 #756 (@fcollonval)
Fix error handler in simple extension examples #750 (@andreyvelich)
Clean up workflows #747 (@blink1073)
Remove Redundant Dir_Exists Invocation When Creating New Files with ContentsManager #720 (@jhamet93)
Other merged PRs#
Contributors to this release#
(GitHub contributors page for this release)
@andreyvelich | @blink1073 | @codecov-commenter | @divyansshhh | @dleen | @fcollonval | @jhamet93 | @meeseeksdev | @minrk | @rccern | @welcome | @Zsailer
1.15.6#
Bugs fixed#
Maintenance and upkeep improvements#
More CI Cleanup #742 (@blink1073)
Clean up downstream tests #741 (@blink1073)
Contributors to this release#
1.15.5#
Bugs fixed#
Maintenance and upkeep improvements#
Fix sdist test #736 (@blink1073)
Contributors to this release#
1.15.3#
Bugs fixed#
Fix server-extension paths (3rd time’s the charm) #734 (@minrk)
Revert “Server extension paths (#730)” #732 (@blink1073)
Maintenance and upkeep improvements#
Avoid usage of ipython_genutils #718 (@blink1073)
Contributors to this release#
1.15.2#
Bugs fixed#
Maintenance and upkeep improvements#
Skip nbclassic downstream tests for now #725 (@blink1073)
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @minrk | @Zsailer
1.15.1#
Bugs fixed#
Revert “Reuse ServerApp.config_file_paths for consistency (#715)” #728 (@blink1073)
Contributors to this release#
1.15.0#
New features added#
Enhancements made#
Validate notebooks once per fetch or save #724 (@kevin-bates)
Register pre/post save hooks, call them sequentially #696 (@davidbrochart)
Bugs fixed#
Call pre_save_hook only on first chunk of large files #716 (@davidbrochart)
Reuse ServerApp.config_file_paths for consistency #715 (@minrk)
serverapp: Use .absolute() instead of .resolve() for symlinks #712 (@EricCousineau-TRI)
Fall back to legacy protocol if selected_subprotocol raises exception #706 (@davidbrochart)
Maintenance and upkeep improvements#
Clean up CI #723 (@blink1073)
Clean up activity recording #722 (@blink1073)
Clean up Dependency Handling #707 (@blink1073)
Add Minimum Requirements Test #704 (@blink1073)
Clean up handling of tests #700 (@blink1073)
Refresh precommit #698 (@blink1073)
Use pytest-github-actions-annotate-failures #694 (@blink1073)
Documentation improvements#
Add WebSocket wire protocol documentation #693 (@davidbrochart)
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @codecov-commenter | @davidbrochart | @echarles | @EricCousineau-TRI | @jhamet93 | @kevin-bates | @minrk | @vidartf | @welcome | @Wh1isper | @Zsailer
1.13.5#
Enhancements made#
Protocol alignment #657 (@davidbrochart)
Bugs fixed#
Fix to remove potential memory leak on Jupyter Server ZMQChannelHandler code #682 (@Vishwajeet0510)
Pin pywintpy for now #681 (@blink1073)
Fix the non-writable path deletion error #670 (@vkaidalov)
make unit tests backwards compatible without pending kernels #669 (@Zsailer)
Maintenance and upkeep improvements#
Clean up full install test #689 (@blink1073)
Update trigger_precommit.yml #687 (@blink1073)
Add Auto Pre-Commit #685 (@blink1073)
Fix a typo #683 (@krassowski)
(temporarily) skip pending kernels unit tests on Windows CI #673 (@Zsailer)
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @codecov-commenter | @davidbrochart | @echarles | @github-actions | @jasongrout | @krassowski | @maartenbreddels | @SylvainCorlay | @Vishwajeet0510 | @vkaidalov | @welcome | @Wh1isper | @Zsailer
1.13.4#
Bugs fixed#
Fix nbconvert handler run_sync() #667 (@davidbrochart)
Contributors to this release#
1.13.3#
Enhancements made#
Bugs fixed#
Contributors to this release#
1.13.2#
Enhancements made#
Bugs fixed#
Run pre_save_hook before model check #643 (@davidbrochart)
Maintenance and upkeep improvements#
Clean up deprecations #650 (@blink1073)
Update branch references #646 (@blink1073)
pyproject.toml: clarify build system version #634 (@adamjstewart)
Contributors to this release#
(GitHub contributors page for this release)
@adamjstewart | @blink1073 | @ccw630 | @codecov-commenter | @davidbrochart | @echarles | @fcollonval | @kevin-bates | @op3 | @welcome | @Wh1isper | @Zsailer
1.13.1#
Bugs fixed#
Maintenance and upkeep improvements#
Fix macos pypy check #632 (@blink1073)
Contributors to this release#
1.13.0#
Enhancements made#
Bugs fixed#
Nudge on the control channel instead of the shell #628 (@JohanMabille)
Maintenance and upkeep improvements#
Clean up downstream tests #629 (@blink1073)
Clean up version info handling #620 (@blink1073)
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @codecov-commenter | @echarles | @JohanMabille | @jtpio | @Zsailer
1.12.1#
Bugs fixed#
Maintenance and upkeep improvements#
Use maintainer-tools base setup action #616 (@blink1073)
Contributors to this release#
1.12.0#
Enhancements made#
Use pending kernels #593 (@blink1073)
Bugs fixed#
Maintenance and upkeep improvements#
Enforce labels on PRs #613 (@blink1073)
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @codecov-commenter | @havok2063 | @minrk | @mwakaba2 | @toonn | @welcome | @Zsailer
1.11.2#
Bugs fixed#
Maintenance and upkeep improvements#
Avoid dependency on NBConvert versions for REST API test #601 (@Zsailer)
Bump ansi-regex from 5.0.0 to 5.0.1 #590 (@dependabot)
Contributors to this release#
(GitHub contributors page for this release)
@codecov-commenter | @dependabot | @kevin-bates | @stdll00 | @welcome | @Wh1isper | @Zsailer
1.11.1#
Bugs fixed#
Do not log connection error if the kernel is already shutdown #584 (@martinRenou)
[BUG]: allow None for min_open_files_limit trait #587 (@Zsailer)
Contributors to this release#
1.11.0#
Enhancements made#
Allow non-empty directory deletion through settings #574 (@fcollonval)
Bugs fixed#
pytest_plugin: allow user specified headers in jp_ws_fetch #580 (@oliver-sanders)
Shutdown kernels/terminals on api/shutdown #579 (@martinRenou)
pytest: package conftest #576 (@oliver-sanders)
Set stacklevel on warning to point to the right place. #572 (@Carreau)
Maintenance and upkeep improvements#
Fix jupyter_client warning #581 (@martinRenou)
Add Pre-Commit Config #575 (@fcollonval)
Clean up link checking #569 (@blink1073)
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @Carreau | @codecov-commenter | @fcollonval | @martinRenou | @oliver-sanders | @vidartf
1.10.2#
Bugs fixed#
fix: make command line aliases work again #564 (@mariobuikhuizen)
decode bytes from secure cookie #562 (@oliver-sanders)
Maintenance and upkeep improvements#
Documentation improvements#
Contributors to this release#
(GitHub contributors page for this release)
@afshin | @codecov-commenter | @echarles | @manics | @mariobuikhuizen | @oliver-sanders | @welcome | @Zsailer
1.10.1#
Bugs fixed#
Protect against unset spec #556 (@fcollonval)
Contributors to this release#
1.10.0#
Enhancements made#
stop hook for extensions #526 (@oliver-sanders)
extensions: allow extensions in namespace packages #523 (@oliver-sanders)
Bugs fixed#
Fix examples/simple test execution #552 (@davidbrochart)
Rebuild package-lock, fixing local setup #548 (@martinRenou)
Maintenance and upkeep improvements#
small test changes #541 (@oliver-sanders)
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @codecov-commenter | @davidbrochart | @goanpeca | @kevin-bates | @martinRenou | @oliver-sanders | @welcome | @Zsailer
1.9.0#
Enhancements made#
enable a way to run a task when an io_loop is created #531 (@eastonsuo)
adds GatewayClient.auth_scheme configurable #529 (@telamonian)
[Notebook port 4835] Add UNIX socket support to notebook server #525 (@jtpio)
Bugs fixed#
Fix nbconvert handler #545 (@davidbrochart)
Maintenance and upkeep improvements#
Test Downstream Packages #528 (@blink1073)
fix jp_ws_fetch not work by its own #441 #527 (@eastonsuo)
Documentation improvements#
Update link to meeting notes #535 (@krassowski)
Contributors to this release#
(GitHub contributors page for this release)
@blink1073 | @codecov-commenter | @davidbrochart | @eastonsuo | @icankeep | @jtpio | @kevin-bates | @krassowski | @telamonian | @vidartf | @welcome | @Zsailer
1.8.0#
Enhancements made#
Bugs fixed#
Maintenance and upkeep improvements#
Contributors to this release#
(GitHub contributors page for this release)
@codecov-commenter | @jtpio | @minrk | @mwakaba2 | @vidartf | @welcome | @Zsailer
1.7.0#
Bugs fixed#
Fix for recursive symlink - (port Notebook 4670) #497 (@kevin-bates)
Enhancements made#
Refactor gateway kernel management to achieve a degree of consistency #483 (@kevin-bates)
Maintenance and upkeep improvements#
Use kernel_id for new kernel if it doesn’t exist in MappingKernelManager.start_kernel #511 (@the-higgs)
Include backtrace in debug output when extension fails to load #506 (@candlerb)
ExtensionPoint: return True on successful validate() #503 (@minrk)
ExtensionManager: load default config manager by default #502 (@minrk)
Drop dependency on pywin32 #514 (@kevin-bates)
Add Appropriate Token Permission for CodeQL Workflow #489 (@afshin)
Documentation improvements#
Contributors to this release#
(GitHub contributors page for this release)
@codecov-commenter | @hMED22 | @jtpio | @kevin-bates | @the-higgs | @welcome | @blink1073 | @candlerb | @minrk | @mwakaba2 | @Zsailer | @kiendang | @Carreau
1.6.4#
Bugs fixed#
Contributors to this release#
1.6.3#
Merges#
Gate anyio version. 2b51ee3
Fix activity tracking and nudge issues when kernel ports change on restarts #482 (@kevin-bates)
Contributors to this release#
1.6.2#
Enhancements made#
Bugs fixed#
Maintenance and upkeep improvements#
Contributors to this release#
1.6.1#
Merged PRs#
Contributors to this release#
(GitHub contributors page for this release)
@codecov-io | @davidbrochart | @echarles | @faucct | @jtpio | @welcome
1.6.0#
New features added#
Enhancements made#
Maintenance and upkeep improvements#
Documentation improvements#
Other merged PRs#
Contributors to this release#
(GitHub contributors page for this release)
@afshin | @codecov-io | @echarles | @jasongrout | @jtpio | @kevin-bates | @vidartf
1.5.1#
Merged pull requests:
Contributors to this release:
1.5.0#
Merged pull requests:
Escape user input in handlers flagged during code scans #449 (@kevin-bates)
Update CI badge and fix broken link #443 (@blink1073)
Port terminal culling from Notebook #438 (@kevin-bates)
More complex handling of open_browser from extension applications #433 (@afshin)
Contributors to this release:
(GitHub contributors page for this release)
@afshin | @blink1073 | @codecov-io | @jtpio | @kevin-bates | @kiendang | @minrk | @sngyo | @Zsailer
1.4.1 (2021-02-22)#
Merged pull requests:
Update README.md #425 (@BobinMathew)
Solve UnboundLocalError in launch_browser() #421 (@jamesmishra)
Remove outdated reference to _jupyter_server_extension_paths in docs #419 (@Zsailer)
Contributors to this release:
1.4.0 (2021-02-18)#
Merged pull requests:
Remove obsoleted asyncio-patch fixture #412 (kevin-bates)
Emit deprecation warning on old name #411 (fcollonval)
Correct logging message position #410 (fcollonval)
Update 1.3.0 Changelog to include broken 1.2.3 PRs #408 (kevin-bates)
[Gateway] Track only this server’s kernels #407 (kevin-bates)
Update manager.py: more descriptive warnings when extensions fail to load #396 (alberti42)
1.3.0 (2021-02-04)#
Merged pull requests (includes those from broken 1.2.3 release):
Special case ExtensionApp that starts the ServerApp #401 (afshin)
only use deprecated notebook_dir config if root_dir is not set #400 (minrk)
Use async kernel manager by default #399 (kevin-bates)
Revert Session.username default value change #398 (mwakaba2)
Enable notebook ContentsManager in jupyter_server #392 (afshin)
Use jupyter_server_config.json as config file in the update password api #390 (echarles)
Increase culling test idle timeout #388 (kevin-bates)
1.2.3 (2021-01-29)#
This was a broken release and was yanked from PyPI.
Merged pull requests:
1.2.2 (2021-01-14)#
Merged pull requests:
Apply missing ensure_async to root session handler methods #386 (kevin-bates)
Replace secure_write, is_hidden, exists with jupyter_core’s #382 (kevin-bates)
1.2.1 (2021-01-08)#
Merged pull requests:
1.2.0 (2021-01-07)#
Merged pull requests:
1.1.4 (2021-01-04)#
Merged pull requests:
Update the link to paths documentation #371 (krassowski)
IPythonHandler -> JupyterHandler #370 (krassowski)
use setuptools find_packages, exclude tests, docs and examples from dist #368 (bollwyvl)
Update serverapp.py #367 (michaelaye)
1.1.3 (2020-12-23)#
Merged pull requests:
1.1.2 (2020-12-21)#
Merged pull requests:
Nudge kernel with info request until we receive IOPub messages #361 (SylvainCorlay)
1.1.1 (2020-12-16)#
Merged pull requests:
1.1.0 (2020-12-11)#
Merged pull requests:
Restore pytest plugin from pytest-jupyter #360 (kevin-bates)
Fix upgrade packaging dependencies build step #354 (mwakaba2)
Await _connect and inline read_messages callback to _connect #350 (ricklamers)
Update release instructions and dev version #348 (kevin-bates)
Fix test_trailing_slash #346 (kevin-bates)
Apply security advisory fix to master #345 (kevin-bates)
Port Notebook PRs 5565 and 5588 - terminal shell heuristics #343 (kevin-bates)
Port gateway updates from notebook (PRs 5317 and 5484) #341 (kevin-bates)
add check_origin handler to gateway WebSocketChannelsHandler #340 (ricklamers)
Remove pytest11 entrypoint and plugin, require tornado 6.1, remove asyncio patch, CI work #339 (bollwyvl)
Switch fixtures to use those in pytest-jupyter to avoid collisions #335 (kevin-bates)
Enable CodeQL runs on all pushed branches #333 (kevin-bates)
1.0.6 (2020-11-18)#
1.0.6 is a security release, fixing one vulnerability:
Changed#
Fix open redirect vulnerability GHSA-grfj-wjv9-4f9v (CVE-2020-26232)
1.0 (2020-9-18)#
Added#
Changed#
load_jupyter_server_extension should be renamed to _load_jupyter_server_extension in server extensions. The server now throws a warning when the old name is used. (213)
Docs for server extensions now recommend using the authenticated decorator for handlers. (219)
_jupyter_server_extension_paths should be renamed to _jupyter_server_extension_points in server extensions. (277)
static_url_prefix in ExtensionApps is now a configurable trait. (289)
The extension_name trait was removed in favor of name. (232)
Dropped support for Python 3.5. (296)
Made the config_dir_name trait configurable in ConfigManager. (297)
Removed#
Removed ipykernel as a dependency of jupyter_server. (255)
Fixed#
Prevent a re-definition of prometheus metrics if the notebook package already imports them. (#210)
Fixed terminals REST API unit tests that weren't shutting down properly. (221)
Fixed jupyter_server on Windows for Python < 3.7. Added patch to handle subprocess cleanup. (240)
base_url was being duplicated when getting a url path from the ServerApp. (280)
Extension URLs are now properly prefixed with base_url. Previously, all static paths were not. (285)
Changed the ExtensionApp mixin to inherit from HasTraits. This broke in traitlets 5.0. (294)
Replaces urlparse with url_path_join to prevent URL squashing issues. (304)
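The squashing fix in (304) comes down to joining URL pieces without letting stray leading/trailing slashes collapse or duplicate path segments. A rough standalone sketch of url_path_join-style behavior (an illustration of the idea, not jupyter_server's actual code):

```python
def url_path_join(*pieces):
    """Join URL path components, collapsing duplicate slashes.

    Sketch of the behavior described above: inner slashes are
    normalized, while a leading slash on the first piece and a
    trailing slash on the last piece are preserved.
    """
    initial = pieces[0].startswith("/")
    final = pieces[-1].endswith("/")
    stripped = [piece.strip("/") for piece in pieces]
    result = "/".join(piece for piece in stripped if piece)
    if initial:
        result = "/" + result
    if final:
        result = result + "/"
    if result == "//":
        result = "/"
    return result
```

For example, joining "/base/" and "/static/" this way yields "/base/static/" instead of the squashed or doubled paths naive concatenation can produce.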
[0.3] - 2020-4-22#
Added#
Changed#
Removed#
(#194) The bundlerextension entry point was removed.
[0.2.1] - 2020-1-10#
Added#
pytest-plugin for Jupyter Server.
Allows one to write async/await syntax in test functions.
Some particularly useful fixtures include:
serverapp: a default ServerApp instance that handles setup and teardown.
configurable_serverapp: a function that returns a ServerApp instance.
fetch: an awaitable function for making requests to the server API.
create_notebook: a function that writes a notebook to a given temporary file path.
[0.2.0] - 2019-12-19#
Added#
extension submodule (#48)
ExtensionApp - configurable JupyterApp subclass for server extensions
Most useful for Jupyter frontends, like Notebook, JupyterLab, nteract, voila, etc.
Launch with entrypoints
Configure from file or CLI
Add custom templates, static assets, handlers, etc.
Static assets are served behind a /static/<extension_name> endpoint.
ExtensionHandler - tornado handlers for extensions.
Finds static assets at /static/<extension_name>
Changed#
The jupyter serverextension <command> entrypoint has been changed to jupyter server extension <command>.
The toggle_jupyter_server and validate_jupyter_server functions no longer take a Logger object as an argument.
Changed testing framework from nosetests to pytest (#152)
Depend on the pytest-tornasync extension for handling the tornado/asyncio eventloop
Depend on pytest-console-scripts for testing CLI entrypoints
Added GitHub Actions as a testing framework alongside Travis and Azure (#146)
Added Github actions as a testing framework along side Travis and Azure (#146)
Removed#
Removed the option to update the root_dir trait in FileContentsManager and MappingKernelManager in ServerApp (#135)
Fixed#
Security#
Added a secure_write function for cookie/token saves (#77)