Welcome!#

You’ve landed on the documentation pages for the Jupyter Server Project.

Introduction#

Jupyter Server is the backend that provides the core services, APIs, and REST endpoints for Jupyter web applications.

Note

Jupyter Server is a replacement for the Tornado Web Server in Jupyter Notebook. Jupyter web applications should move to using Jupyter Server. For help, see the Migrating from Notebook Server page.

Applications#

Jupyter Server extensions can use the framework and services provided by Jupyter Server to create applications and services.

Examples of Jupyter Server extensions include:

Jupyter Lab

JupyterLab computational environment.

Jupyter Resource Usage

Jupyter Notebook Extension for monitoring your own resource usage.

Jupyter Scheduler

Run Jupyter notebooks as jobs.

jupyter-collaboration

A Jupyter Server Extension Providing Support for Y Documents.

NbClassic

Jupyter notebook as a Jupyter Server extension.

Cylc UI Server

A Jupyter Server extension that serves the cylc-ui web application for monitoring and controlling Cylc workflows.

For more information on extensions, see Server Extensions.

Who’s this for?#

The Jupyter Server is a highly technical piece of the Jupyter Stack, so we’ve separated documentation to help specific personas:

  1. Users: people using Jupyter web applications.

  2. Operators: people deploying or serving Jupyter web applications to others.

  3. Developers: people writing Jupyter Server extensions and web applications.

  4. Contributors: people contributing directly to the Jupyter Server library.

If you find gaps in our documentation, please open an issue (or better, a pull request) on the Jupyter Server GitHub repo.

Table of Contents#

Documentation for Users#

The Jupyter Server is a highly technical piece of the Jupyter Stack, so users probably won’t import or install this library directly. These pages are meant to help you in case you run into issues or bugs.

Installation#

Most Jupyter users will never need to install Jupyter Server manually. Jupyter web applications include the correct version of Jupyter Server as a dependency. It’s best to let those applications handle installation, because they may require a specific version of Jupyter Server.

If you decide to install manually, run:

pip install jupyter_server

You can upgrade or downgrade to a specific version of Jupyter Server by adding a version specifier to the command above:

pip install jupyter_server==1.0

Configuring a Jupyter Server#

Using a Jupyter config file#

By default, Jupyter Server looks for server-specific configuration in a jupyter_server_config file located on a Jupyter path. To list the paths where Jupyter Server will look, run:

$ jupyter --paths

config:
    /Users/username/.jupyter
    /usr/local/etc/jupyter
    /etc/jupyter
data:
    /Users/username/Library/Jupyter
    /usr/local/share/jupyter
    /usr/share/jupyter
runtime:
    /Users/username/Library/Jupyter/runtime

The paths under config are listed in order of precedence. If the same trait is listed in multiple places, it will be set to the value from the file with the highest precedence.

Jupyter Server uses IPython’s traitlets system for configuration. Traits can be listed in a Python or JSON config file. To quickly create a jupyter_server_config.py file in the .jupyter directory, with all the defaults commented out, use the following command:

$ jupyter server --generate-config

In Python files, these traits will have the prefix c.ServerApp. For example, your configuration file could look like:

# inside a jupyter_server_config.py file.

c.ServerApp.port = 9999

The same configuration in JSON looks like:

{
    "ServerApp": {
        "port": 9999
    }
}
Using the CLI#

Alternatively, you can configure Jupyter Server when launching from the command line using CLI args. Prefix each argument with --ServerApp like so:

$ jupyter server --ServerApp.port=9999
Full configuration list#

See the full list of configuration options for the server here.

Launching a bare Jupyter Server#

Most of the time, you won’t need to start the Jupyter Server directly. Jupyter Web Applications (like Jupyter Notebook, JupyterLab, Voila, etc.) come with their own entry points that start a server automatically.

Sometimes, though, it can be useful to start Jupyter Server directly when you want to run multiple Jupyter Web applications at the same time. For more details, see the Managing multiple extensions page. If these extensions are enabled, you can simply run the following:

> jupyter server

[I 2020-03-20 15:48:20.903 ServerApp] Serving notebooks from local directory: /Users/username/home
[I 2020-03-20 15:48:20.903 ServerApp] Jupyter Server 1.0.0 is running at:
[I 2020-03-20 15:48:20.903 ServerApp] http://localhost:8888/?token=<...>
[I 2020-03-20 15:48:20.903 ServerApp]  or http://127.0.0.1:8888/?token=<...>
[I 2020-03-20 15:48:20.903 ServerApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[I 2020-03-20 15:48:20.903 ServerApp] Welcome to Project Jupyter! Explore the various tools available and their corresponding documentation. If you are interested in contributing to the platform, please visit the communityresources section at https://jupyter.org/community.html.
[C 2020-03-20 15:48:20.907 ServerApp]

    To access the server, open this file in a browser:
        file:///Users/username/jpserver-###-open.html
    Or copy and paste one of these URLs:
        http://localhost:8888/?token=<...>
    or http://127.0.0.1:8888/?token=<...>

Getting Help#

If you run into any issues or bugs, please open an issue on Github.

We’d also love to have you come by our Team Meetings.

Documentation for Operators#

These pages are targeted at people using, configuring, and/or deploying multiple Jupyter Web Applications with Jupyter Server.

Managing multiple extensions#

One of the major benefits of Jupyter Server is that you can serve multiple Jupyter frontend applications on the same Tornado web server. That’s because every Jupyter frontend application is now a server extension. When you run a Jupyter Server with multiple extensions enabled, each extension appends its own set of handlers and static assets to the server.

Listing extensions#

When you install a Jupyter Server extension, it should automatically add itself to your list of enabled extensions. You can see a list of installed extensions by calling:

> jupyter server extension list

config dir: /Users/username/etc/jupyter
    myextension enabled
    - Validating myextension...
      myextension  OK
Enabling/disabling extensions#

You enable/disable an extension using the following commands:

> jupyter server extension enable myextension

Enabling: myextension
    - Validating myextension...
      myextension  OK
    - Extension successfully enabled.


> jupyter server extension disable myextension

Disabling: jupyter_home
    - Validating jupyter_home...
      jupyter_home  OK
    - Extension successfully disabled.
Running an extension from its entrypoint#

Extensions that are also Jupyter applications (i.e. Notebook, JupyterLab, Voila, etc.) can be launched from a CLI entrypoint. For example, launch Jupyter Notebook using:

> jupyter notebook

Jupyter Server will automatically start a server and the browser will be routed to Jupyter Notebook’s default URL (typically, /tree).

Other enabled extensions will still be available to the user. The entrypoint simply offers a more direct (backwards compatible) launching mechanism.

Launching a server with multiple extensions#

If multiple extensions are enabled, a Jupyter Server can be launched directly:

> jupyter server

[I 2020-03-23 15:44:53.290 ServerApp] Serving notebooks from local directory: /Users/username/path
[I 2020-03-23 15:44:53.290 ServerApp] Jupyter Server 0.3.0.dev is running at:
[I 2020-03-23 15:44:53.290 ServerApp] http://localhost:8888/?token=<...>
[I 2020-03-23 15:44:53.290 ServerApp]  or http://127.0.0.1:8888/?token=<...>
[I 2020-03-23 15:44:53.290 ServerApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[I 2020-03-23 15:44:53.290 ServerApp] Welcome to Project Jupyter! Explore the various tools available and their corresponding documentation. If you are interested in contributing to the platform, please visit the communityresources section at https://jupyter.org/community.html.
[C 2020-03-23 15:44:53.296 ServerApp]

    To access the server, open this file in a browser:
        file:///Users/username/pathjpserver-####-open.html
    Or copy and paste one of these URLs:
        http://localhost:8888/?token=<...>
    or http://127.0.0.1:8888/?token=<...>

Extensions can also be enabled manually from the Jupyter Server entrypoint using the jpserver_extensions trait:

> jupyter server --ServerApp.jpserver_extensions="myextension=True"
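
The same extensions can also be enabled from a config file rather than the command line. A minimal sketch of the equivalent setting in jupyter_server_config.py, where myextension is a placeholder for your extension’s package name:

c.ServerApp.jpserver_extensions = {"myextension": True}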

Configuring Extensions#

Some Jupyter Server extensions are also configurable applications. There are two ways to configure such extensions: i) pass arguments to the extension’s entry point or ii) list configurable options in a Jupyter config file.

Jupyter Server looks for an extension’s config file in a set of specific paths. Use the jupyter entry point to list these paths:

> jupyter --paths

config:
    /Users/username/.jupyter
    /usr/local/etc/jupyter
    /etc/jupyter
data:
    /Users/username/Library/Jupyter
    /usr/local/share/jupyter
    /usr/share/jupyter
runtime:
    /Users/username/Library/Jupyter/runtime
Extension config from file#

Jupyter Server expects the file to be named after the extension’s name like so: jupyter_{name}_config. For example, the Jupyter Notebook’s config file is jupyter_notebook_config.

Configuration files can be Python or JSON files.

In Python config files, each trait is prefixed with c., which links the trait to the config loader. For example, a Jupyter Notebook config might look like:

# jupyter_notebook_config.py

c.NotebookApp.mathjax_enabled = False

A Jupyter Server will automatically load config for each enabled extension. You can configure each extension by creating their corresponding Jupyter config file.

Extension config on the command line#

Server extension applications can also be configured from the command line, and multiple extensions can be configured at the same time. Simply pass the traits (with their appropriate prefix) to the jupyter server entrypoint, e.g.:

> jupyter server --ServerApp.port=9999 --MyExtension1.trait=False --MyExtension2.trait=True

This will also work with any extension entrypoints that allow other extensions to run side-by-side, e.g.:

> jupyter myextension --ServerApp.port=9999 --MyExtension1.trait=False --MyExtension2.trait=True

Migrating from Notebook Server#

To migrate from notebook server to plain jupyter server, follow these steps:

  • Rename your jupyter_notebook_config.py file to jupyter_server_config.py.

  • Rename all c.NotebookApp traits to c.ServerApp.

For example, if you have the following jupyter_notebook_config.py:

c.NotebookApp.allow_credentials = False
c.NotebookApp.port = 8889
c.NotebookApp.password_required = True

You will need to create the following jupyter_server_config.py file:

c.ServerApp.allow_credentials = False
c.ServerApp.port = 8889
c.ServerApp.password_required = True
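
If you have many traits to migrate, a small throwaway script can do the renaming for you. This is only a sketch, assuming your config uses the c.NotebookApp. prefix consistently; review the generated file (and back up any existing jupyter_server_config.py) before using it:

# migrate_config.py (hypothetical helper, not part of Jupyter Server)
from pathlib import Path

old = Path.home() / ".jupyter" / "jupyter_notebook_config.py"
new = Path.home() / ".jupyter" / "jupyter_server_config.py"

# Rewrite every c.NotebookApp.* trait to c.ServerApp.*
new.write_text(old.read_text().replace("c.NotebookApp.", "c.ServerApp."))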

Running Jupyter Notebook on Jupyter Server#

If you want to switch to Jupyter Server, but you still want to serve Jupyter Notebook to users, you can try NBClassic.

NBClassic is a Jupyter Server extension that serves the Notebook frontend (i.e. all static assets) on top of Jupyter Server. It even loads Jupyter Notebook’s config files.

Warning

NBClassic will only work for a limited time. Jupyter Server is likely to evolve beyond a point where Jupyter Notebook frontend will no longer work with the underlying server. Consider switching to JupyterLab or nteract where there is active development happening.

Running a public Jupyter Server#

The Jupyter Server uses a two-process kernel architecture based on ZeroMQ, as well as Tornado for serving HTTP requests.

Note

By default, Jupyter Server runs locally at 127.0.0.1:8888 and is accessible only from localhost. You may access the server from the browser using http://127.0.0.1:8888.

This document describes how you can secure a Jupyter server and how to run it on a public interface.

Important

This is not the multi-user server you are looking for. This document describes how you can run a public server with a single user. This should only be done by someone who wants remote access to their personal machine. Even so, doing this requires a thorough understanding of the set-up’s limitations and security implications. If you allow multiple users to access a Jupyter server as described in this document, their commands may collide, clobber, and overwrite each other.

If you want a multi-user server, the official solution is JupyterHub. To use JupyterHub, you need a Unix server (typically Linux) running somewhere that is accessible to your users on a network. This may run over the public internet, but doing so introduces additional security concerns.

Securing a Jupyter server#

You can protect your Jupyter server with a single password. As of notebook version 5.0, this can be done automatically. To set up a password manually, you can configure the ServerApp.password setting in jupyter_server_config.py.

Prerequisite: A Jupyter server configuration file#

Check to see if you have a Jupyter server configuration file, jupyter_server_config.py. The default location for this file is your Jupyter folder located in your home directory:

  • Windows: C:\Users\USERNAME\.jupyter\jupyter_server_config.py

  • OS X: /Users/USERNAME/.jupyter/jupyter_server_config.py

  • Linux: /home/USERNAME/.jupyter/jupyter_server_config.py

If you don’t already have a Jupyter folder, or if your Jupyter folder doesn’t contain a Jupyter server configuration file, run the following command:

$ jupyter server --generate-config

This command will create the Jupyter folder if necessary, and create a Jupyter server configuration file, jupyter_server_config.py, in this folder.

Automatic Password setup#

As of notebook version 5.3, the first time you log in using a token, the server should give you the opportunity to set up a password from the user interface.

You will be presented with a form asking for the current token, as well as your new password; enter both and click on Login and setup new password.

The next time you need to log in, you’ll be able to use the new password instead of the login token; otherwise, follow the procedure below to set a password from the command line.

The ability to change the password at first login time may be disabled by integrations by setting --ServerApp.allow_password_change=False.

Starting at notebook version 5.0, you can enter and store a password for your server with a single command. jupyter server password will prompt you for your password and record the hashed password in your jupyter_server_config.json.

$ jupyter server password
Enter password:  ****
Verify password: ****
[JupyterPasswordApp] Wrote hashed password to /Users/you/.jupyter/jupyter_server_config.json

This can be used to reset a lost password, or to change your password if you believe your credentials have been leaked. Changing your password will invalidate all logged-in sessions after a server restart.

Preparing a hashed password#

You can prepare a hashed password manually, using the function jupyter_server.auth.passwd():

>>> from jupyter_server.auth import passwd
>>> passwd()
Enter password:
Verify password:
'sha1:67c9e60bb8b6:9ffede0825894254b2e042ea597d771089e11aed'

Caution

passwd() when called with no arguments will prompt you to enter and verify your password such as in the above code snippet. Although the function can also be passed a string as an argument such as passwd('mypassword'), please do not pass a string as an argument inside an IPython session, as it will be saved in your input history.
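
If you want to generate the hash without typing the password into an interactive session, run passwd() from a small standalone script so the plaintext never lands in your input history. A minimal sketch (the script name is hypothetical):

# hash_password.py
from jupyter_server.auth import passwd

hashed = passwd()  # prompts for the password twice on stdin
print(f"c.ServerApp.password = '{hashed}'")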

Adding hashed password to your notebook configuration file#

You can then add the hashed password to your jupyter_server_config.py. The default location for this file is your Jupyter folder in your home directory, ~/.jupyter, e.g.:

c.ServerApp.password = u'sha1:67c9e60bb8b6:9ffede0825894254b2e042ea597d771089e11aed'

Automatic password setup stores the hash in jupyter_server_config.json, while this method stores the hash in jupyter_server_config.py. The .json configuration options take precedence over the .py ones, so the manually set password may not take effect if the JSON file already has a password set.

Using SSL for encrypted communication#

When using a password, it is a good idea to also use SSL with a web certificate, so that your hashed password is not sent unencrypted by your browser.

Important

Web security is rapidly changing and evolving. We provide this document as a convenience to the user, and recommend that the user keep current on changes that may impact security, such as new releases of OpenSSL. The Open Web Application Security Project (OWASP) website is a good resource on general security issues and web practices.

You can start the server in secure (HTTPS) mode by setting the certfile option to your self-signed certificate, e.g. mycert.pem, with the command:

$ jupyter server --certfile=mycert.pem --keyfile mykey.key

Tip

A self-signed certificate can be generated with openssl. For example, the following command will create a certificate valid for 365 days with both the key and certificate data written to the same file:

$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout mykey.key -out mycert.pem

When starting the notebook server, your browser may warn that your self-signed certificate is insecure or unrecognized. If you wish to have a fully compliant self-signed certificate that will not raise warnings, it is possible (but rather involved) to create one, as explained in detail in this tutorial. Alternatively, you may use Let’s Encrypt to acquire a free SSL certificate and follow the steps in Using Let’s Encrypt to set up a public server.

Running a public notebook server#

If you want to access your notebook server remotely via a web browser, you can do so by running a public notebook server. For optimal security when running a public notebook server, you should first secure the server with a password and SSL/HTTPS as described in Securing a Jupyter server.

Start by creating a certificate file and a hashed password, as explained in Securing a Jupyter server.

If you don’t already have one, create a config file for the notebook using the following command line:

$ jupyter server --generate-config

In the ~/.jupyter directory, edit the notebook config file, jupyter_server_config.py. By default, the notebook config file has all fields commented out. The minimum set of configuration options that you should uncomment and edit in jupyter_server_config.py is the following:

# Set options for certfile, ip, password, and toggle off
# browser auto-opening
c.ServerApp.certfile = u'/absolute/path/to/your/certificate/mycert.pem'
c.ServerApp.keyfile = u'/absolute/path/to/your/certificate/mykey.key'
# Set ip to '*' to bind on all interfaces (ips) for the public server
c.ServerApp.ip = '*'
c.ServerApp.password = u'sha1:bcd259ccf...<your hashed password here>'
c.ServerApp.open_browser = False

# It is a good idea to set a known, fixed port for server access
c.ServerApp.port = 9999

You can then start the notebook using the jupyter server command.

Using Let’s Encrypt#

Let’s Encrypt provides free SSL/TLS certificates. You can also set up a public server using a Let’s Encrypt certificate.

Running a public notebook server with a Let’s Encrypt certificate is similar, with only a few configuration changes. Here are the steps:

  1. Create a Let’s Encrypt certificate.

  2. Create a hashed password, as described in Preparing a hashed password.

  3. If you don’t already have config file for the notebook, create one using the following command:

    $ jupyter server --generate-config
    

  4. In the ~/.jupyter directory, edit the notebook config file, jupyter_server_config.py. By default, the notebook config file has all fields commented out. The minimum set of configuration options that you should uncomment and edit in jupyter_server_config.py is the following:

# Set options for certfile, ip, password, and toggle off
# browser auto-opening
c.ServerApp.certfile = u'/absolute/path/to/your/certificate/fullchain.pem'
c.ServerApp.keyfile = u'/absolute/path/to/your/certificate/privkey.pem'
# Set ip to '*' to bind on all interfaces (ips) for the public server
c.ServerApp.ip = '*'
c.ServerApp.password = u'sha1:bcd259ccf...<your hashed password here>'
c.ServerApp.open_browser = False

# It is a good idea to set a known, fixed port for server access
c.ServerApp.port = 9999

You can then start the notebook using the jupyter server command.

Important

Use ‘https’. Keep in mind that when you enable SSL support, you must access the notebook server over https://, not over plain http://. The startup message from the server prints a reminder in the console, but it is easy to overlook this detail and think the server is for some reason non-responsive.

When using SSL, always access the notebook server with ‘https://’.

You may now access the public server by pointing your browser to https://your.host.com:9999 where your.host.com is your public server’s domain.

Firewall Setup#

To function correctly, the firewall on the computer running the jupyter notebook server must be configured to allow connections from client machines on the access port c.ServerApp.port set in jupyter_server_config.py, so that clients can reach the web interface. The firewall must also allow connections from 127.0.0.1 (localhost) on ports 49152 to 65535. These ports are used by the server to communicate with the notebook kernels. The kernel communication ports are chosen randomly by ZeroMQ, and may require multiple connections per kernel, so a large range of ports must be accessible.

Running the notebook with a customized URL prefix#

The notebook dashboard, which is the landing page with an overview of the notebooks in your working directory, is typically found and accessed at the default URL http://localhost:8888/.

If you prefer to customize the URL prefix for the notebook dashboard, you can do so by modifying jupyter_server_config.py. For example, if you prefer that the notebook dashboard be located under a sub-directory that contains other ipython files, e.g. http://localhost:8888/ipython/, you can do so with configuration options like the following (see above for instructions about modifying jupyter_server_config.py):

c.ServerApp.base_url = "/ipython/"
Embedding the notebook in another website#

Sometimes you may want to embed the notebook somewhere on your website, e.g. in an IFrame. To do this, you may need to override the Content-Security-Policy to allow embedding. Assuming your website is at https://mywebsite.example.com, you can embed the notebook on your website with the following configuration setting in jupyter_server_config.py:

c.ServerApp.tornado_settings = {
    "headers": {
        "Content-Security-Policy": "frame-ancestors https://mywebsite.example.com 'self' "
    }
}
Using a gateway server for kernel management#

You can redirect the management of your kernels to a Gateway Server (i.e., Jupyter Kernel Gateway or Jupyter Enterprise Gateway) simply by specifying the Gateway URL via the following command-line option:

$ jupyter notebook --gateway-url=http://my-gateway-server:8888

or via the environment variable:

JUPYTER_GATEWAY_URL=http://my-gateway-server:8888

or in jupyter_notebook_config.py:

c.GatewayClient.url = "http://my-gateway-server:8888"

When provided, all kernel specifications will be retrieved from the specified Gateway server and all kernels will be managed by that server. This option enables the ability to target kernel processes against managed clusters while allowing for the notebook’s management to remain local to the Notebook server.

Known issues#
Proxies#

When behind a proxy, especially if your system or browser is set to autodetect the proxy, the notebook web application might fail to connect to the server’s websockets, and present you with a warning at startup. In this case, you need to configure your system not to use the proxy for the server’s address.

For example, in Firefox, go to the Preferences panel, Advanced section, Network tab, click ‘Settings…’, and add the address of the Jupyter server to the ‘No proxy for’ field.

Content-Security-Policy (CSP)#

Certain security guidelines recommend that servers use a Content-Security-Policy (CSP) header to prevent cross-site scripting vulnerabilities, specifically limiting to default-src: https: when possible. This directive causes two problems with Jupyter. First, it disables execution of inline javascript code, which is used extensively by Jupyter. Second, it limits communication to the https scheme, and prevents WebSockets from working because they communicate via the wss scheme (or ws for insecure communication). Jupyter uses WebSockets for interacting with kernels, so when you visit a server with such a CSP, your browser will block attempts to use wss, which will cause you to see “Connection failed” messages from jupyter notebooks, or simply no response from jupyter terminals. By looking in your browser’s javascript console, you can see any error messages that will explain what is failing.

To avoid these problems, you need to add 'unsafe-inline' and connect-src https: wss: to your CSP header, at least for pages served by jupyter. (That is, you can leave your CSP unchanged for other parts of your website.) Note that multiple CSP headers are allowed, but successive CSP headers can only restrict the policy; they cannot loosen it. For example, if your server sends both of these headers

Content-Security-Policy "default-src https: 'unsafe-inline'"
Content-Security-Policy "connect-src https: wss:"

the first policy will already eliminate wss connections, so the second has no effect. Therefore, you can’t simply add the second header; you have to actually modify your CSP header to look more like this:

Content-Security-Policy "default-src https: 'unsafe-inline'; connect-src https: wss:"

Docker CMD#

Using jupyter server as a Docker CMD results in kernels repeatedly crashing, likely due to a lack of PID reaping. To avoid this, use the tini init program as your Dockerfile ENTRYPOINT:

# Add Tini. Tini operates as a process subreaper for jupyter. This prevents
# kernel crashes.
ENV TINI_VERSION v0.6.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /usr/bin/tini
RUN chmod +x /usr/bin/tini
ENTRYPOINT ["/usr/bin/tini", "--"]

EXPOSE 8888
CMD ["jupyter", "server", "--port=8888", "--no-browser", "--ip=0.0.0.0"]

Security in the Jupyter Server#

Since access to the Jupyter Server means access to running arbitrary code, it is important to restrict access to the server. For this reason, Jupyter Server uses token-based authentication, which is on by default.

Note

If you enable a password for your server, token authentication is not enabled by default.

When token authentication is enabled, the server uses a token to authenticate requests. This token can be provided to log in to the server in three ways:

  • in the Authorization header, e.g.:

    Authorization: token abcdef...
    
  • In a URL parameter, e.g.:

    https://my-server/tree/?token=abcdef...
    
  • In the password field of the login form that will be shown to you if you are not logged in.

When you start a Jupyter server with token authentication enabled (default), a token is generated to use for authentication. This token is logged to the terminal, so that you can copy/paste the URL into your browser:

[I 11:59:16.597 ServerApp] The Jupyter Server is running at:
http://localhost:8888/?token=c8de56fa4deed24899803e93c227592aef6538f93025fe01
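
With that token, a client can authenticate API requests by passing it in the Authorization header. A minimal sketch using the requests library (the URL and token are placeholders):

import requests

# Query the server status endpoint, authenticating with the token from the log above
response = requests.get(
    "http://localhost:8888/api/status",
    headers={"Authorization": "token c8de56fa4deed24899803e93c227592aef6538f93025fe01"},
)
print(response.json())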

If the Jupyter server is going to open your browser automatically, an additional token is generated for launching the browser. This additional token can be used only once, and is used to set a cookie for your browser once it connects. After your browser has made its first request with this one-time-token, the token is discarded and a cookie is set in your browser.

At any later time, you can see the tokens and URLs for all of your running servers with jupyter server list:

$ jupyter server list
Currently running servers:
http://localhost:8888/?token=abc... :: /home/you/notebooks
https://0.0.0.0:9999/?token=123... :: /tmp/public
http://localhost:8889/ :: /tmp/has-password

For servers with token-authentication enabled, the URL in the above listing will include the token, so you can copy and paste that URL into your browser to login. If a server has no token (e.g. it has a password or has authentication disabled), the URL will not include the token argument. Once you have visited this URL, a cookie will be set in your browser and you won’t need to use the token again, unless you switch browsers, clear your cookies, or start a Jupyter server on a new port.

Alternatives to token authentication#

If a generated token doesn’t work well for you, you can set a password for your server. jupyter server password will prompt you for a password, and store the hashed password in your jupyter_server_config.json.

It is possible to disable authentication altogether by setting the token and password to empty strings, but this is NOT RECOMMENDED, unless authentication or access restrictions are handled at a different layer in your web application:

c.ServerApp.token = ""
c.ServerApp.password = ""
Authentication and Authorization#

New in version 2.0.

There are two steps to deciding whether to allow a given request to happen.

The first step is “Authentication” (identifying who is making the request). This is handled by the jupyter_server.auth.IdentityProvider.

Whether a given user is allowed to take a specific action is called “Authorization”, and is handled separately, by an Authorizer.

These two classes may work together, as the information returned by the IdentityProvider is given to the Authorizer when it makes its decisions.

Authentication always takes precedence because if no user is authenticated, no authorization checks need to be made, as all requests requiring authorization must first complete authentication.

Identity Providers#

The jupyter_server.auth.IdentityProvider class is responsible for the “authentication” step, identifying the user making the request, and constructing information about them.

It principally implements two methods.

class jupyter_server.auth.IdentityProvider(**kwargs)#

Interface for providing identity management and authentication.

Two principal methods:

  • get_user() returns a User object for successful authentication, or None for no-identity-found.

  • identity_model() turns a User into a JSONable dict. The default is to use dataclasses.asdict(), and usually shouldn’t need to be overridden.

Additional methods can customize authentication.

New in version 2.0.

get_user(handler)#

Get the authenticated user for a request

Must return a jupyter_server.auth.User, though it may be a subclass.

Return None if the request is not authenticated.

_may_ be a coroutine

Return type:

User | None | t.Awaitable[User | None]

identity_model(user)#

Return a User as an Identity model

Return type:

dict[str, Any]

The first is jupyter_server.auth.IdentityProvider.get_user(). This method is given a RequestHandler, and is responsible for deciding whether there is an authenticated user making the request. If the request is authenticated, it should return a jupyter_server.auth.User object representing the authenticated user. It should return None if the request is not authenticated.

The default implementation accepts token or password authentication.

This User object will be available as self.current_user in any request handler. Request methods decorated with tornado’s @web.authenticated decorator will only be allowed if this method returns something.

The User object will be a Python dataclasses.dataclass - jupyter_server.auth.User:

class jupyter_server.auth.User(username, name='', display_name='', initials=None, avatar_url=None, color=None)#

Object representing a User

This or a subclass should be returned from IdentityProvider.get_user

A custom IdentityProvider may return a custom subclass.

The next method an identity provider has is identity_model(). identity_model(user) is responsible for transforming the user object returned from .get_user() into a standard identity model dictionary, for use in the /api/me endpoint.

If your user object is a simple username string or a dict with a username field, you may not need to implement this method, as the default implementation will suffice.

Any required fields missing from the dict returned by this method will be filled-out with defaults. Only username is strictly required, if that is all the information the identity provider has available.

Missing fields will be derived according to:

  • if name is missing, use username

  • if display_name is missing, use name

Other required fields will be filled with None.
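
As an example of plugging in custom authentication, here is a minimal sketch of an identity provider that trusts a username forwarded by an authenticating reverse proxy. The header name X-Forwarded-User is an assumption about your proxy setup, not something Jupyter Server defines:

from jupyter_server.auth import IdentityProvider, User


class ProxyIdentityProvider(IdentityProvider):
    """Trust the username set by an authenticating reverse proxy (sketch only)."""

    def get_user(self, handler):
        # Assumes a trusted proxy in front of the server sets this header
        username = handler.request.headers.get("X-Forwarded-User")
        if not username:
            return None  # request is not authenticated
        return User(username=username)

Such a provider could then be registered via the ServerApp.identity_provider_class setting.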

Identity Model#

The identity model is the model accessed at /api/me, and describes the currently authenticated user.

It has the following fields:

username

(string) Unique string identifying the user. Must be non-empty.

name

(string) For-humans name of the user. May be the same as username in systems where only usernames are available.

display_name

(string) Alternate rendering of name for display, such as a nickname. Often the same as name.

initials

(string or null) Short string of initials. Initials should not be derived automatically due to localization issues. May be null if unavailable.

avatar_url

(string or null) URL of an avatar image to be used for the user. May be null if unavailable.

color

(string or null) A CSS color string to use as a preferred color, such as for collaboration cursors. May be null if unavailable.

The default implementation of the identity provider is stateless, meaning it doesn’t store user information on the server side. Instead, it utilizes session cookies to generate and store random user information on the client side.

When a user logs in or authenticates, the server generates a session cookie that is stored on the client side. This session cookie is used to keep track of the identity model between requests. If the client does not support session cookies or fails to send the cookie in subsequent requests, the server will treat each request as coming from a new anonymous user and generate a new set of random user information for each request.

To ensure proper functionality of the identity model and to maintain user context between requests, it’s important for clients to support session cookies and send it in subsequent requests. Failure to do so may result in the server generating a new anonymous user for each request, leading to loss of user context.
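
Putting these fields together, the identity model for an authenticated user looks roughly like the following (values are illustrative only):

{
    "username": "d9f8a7b6",
    "name": "Jane Doe",
    "display_name": "Jane Doe",
    "initials": "JD",
    "avatar_url": null,
    "color": null
}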

Authorization#

Authorization is the second step in allowing an action, after a user has been authenticated by the IdentityProvider.

Authorization in Jupyter Server serves to provide finer grained control of access to its API resources. With authentication, requests are accepted if the current user is known by the server. Thus it can restrict access to specific users, but there is no way to grant allowed users more or fewer permissions. Jupyter Server provides a thin and extensible authorization layer which checks if the current user is authorized to make a specific request.

class jupyter_server.auth.Authorizer(**kwargs)#

Base class for authorizing access to resources in the Jupyter Server.

All authorizers used in Jupyter Server should inherit from this base class and, at the very minimum, implement an is_authorized method with the same signature as in this base class.

The is_authorized method is called by the @authorized decorator in JupyterHandler. If it returns True, the incoming request to the server is accepted; if it returns False, the server returns a 403 (Forbidden) error code.

The authorization check will only be applied to requests that have already been authenticated.

New in version 2.0.

is_authorized(handler, user, action, resource)#

A method to determine if user is authorized to perform action (read, write, or execute) on the resource type.

Parameters:
Returns:

True if the user is authorized to make the request; False otherwise

Return type:

bool

This is done by calling an is_authorized(handler, user, action, resource) method before each request handler. Each request is labeled as either a “read”, “write”, or “execute” action:

  • “read” wraps all GET and HEAD requests. In general, read permission grants access to read, but not modify, the given resource.

  • “write” wraps all POST, PUT, PATCH, and DELETE requests. In general, write permission grants access to modify the given resource.

  • “execute” wraps all requests to ZMQ/Websocket channels (terminals and kernels). Execute is a special permission that usually corresponds to arbitrary execution, such as via a kernel or terminal. These permissions should generally be considered sufficient to perform actions equivalent to ~all other permissions via other means.

The resource being accessed refers to the resource name in the Jupyter Server’s API endpoints. In most cases, this is the field after /api/. For instance, values for resource in the endpoints provided by the base Jupyter Server package, and the corresponding permissions:

For each resource below, read, write, and execute describe what a user holding that permission can do, and endpoints lists which /api/... endpoints the resource governs.

  • api
    read: read server status (last activity, number of kernels, etc.), OpenAPI specification
    endpoints: /api/status, /api/spec.yaml

  • csp
    write: report content-security-policy violations
    endpoints: /api/security/csp-report

  • config
    read: read frontend configuration, such as for notebook extensions
    write: modify frontend configuration
    endpoints: /api/config

  • contents
    read: read files
    write: modify files (create, modify, delete)
    endpoints: /api/contents, /view, /files

  • kernels
    read: list kernels, get status of kernels
    write: start, stop, and restart kernels
    execute: connect to kernel websockets, send/recv kernel messages. This generally means arbitrary code execution, and should usually be considered equivalent to having all other permissions.
    endpoints: /api/kernels

  • kernelspecs
    read: read, list information about available kernels
    endpoints: /api/kernelspecs

  • nbconvert
    read: render notebooks to other formats via nbconvert. Note: depending on server-side configuration, this could involve execution.
    endpoints: /api/nbconvert

  • server
    write: shut down the server
    endpoints: /api/shutdown

  • sessions
    read: list current sessions (association of documents to kernels)
    write: create, modify, and delete existing sessions, which includes starting, stopping, and deleting kernels
    endpoints: /api/sessions

  • terminals
    read: list running terminals and their last activity
    write: start new terminals, stop running terminals
    execute: connect to terminal websockets, execute code in a shell. This generally means arbitrary code execution, and should usually be considered equivalent to having all other permissions.
    endpoints: /api/terminals

Extensions may define their own resources. Extension resources should start with extension_name: to avoid namespace conflicts.

If is_authorized(...) returns True, the request is made; otherwise, an HTTPError(403) (403 means “Forbidden”) error is raised, and the request is blocked.

By default, authorization is turned off, i.e. is_authorized() always returns True and all authenticated users are allowed to make all types of requests. To turn on authorization, pass a class that inherits from Authorizer to the ServerApp.authorizer_class parameter, implementing an is_authorized() method with your desired authorization logic, as follows:

from typing import Any

from jupyter_server.auth import Authorizer
from jupyter_server.base.handlers import JupyterHandler


class MyAuthorizationManager(Authorizer):
    """Class for authorizing access to resources in the Jupyter Server.

    All authorizers used in Jupyter Server should inherit from
    AuthorizationManager and, at the very minimum, override and implement
    an `is_authorized` method with the following signature.

    The `is_authorized` method is called by the `@authorized` decorator in
    JupyterHandler. If it returns True, the incoming request to the server
    is accepted; if it returns False, the server returns a 403 (Forbidden) error code.
    """

    def is_authorized(
        self, handler: JupyterHandler, user: Any, action: str, resource: str
    ) -> bool:
        """A method to determine if `user` is authorized to perform `action`
        (read, write, or execute) on the `resource` type.

        Parameters
        ------------
        user : usually a dict or string
            A truthy model representing the authenticated user.
            A username string by default,
            but usually a dict when integrating with an auth provider.

        action : str
            the category of action for the current request: read, write, or execute.

        resource : str
            the type of resource (i.e. contents, kernels, files, etc.) the user is requesting.

        Returns True if user authorized to make request; otherwise, returns False.
        """
        return True  # implement your authorization logic here

The is_authorized() method will automatically be called whenever a handler is decorated with @authorized (from jupyter_server.auth), similarly to the @authenticated decorator for authentication (from tornado.web).
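
The custom class can then be registered in a config file so the server loads it at startup. A minimal sketch, assuming the class above lives in a hypothetical importable package called mypackage:

c.ServerApp.authorizer_class = "mypackage.MyAuthorizationManager"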

Security in notebook documents#

As Jupyter Server becomes more popular for sharing and collaboration, the potential for malicious people to attempt to exploit the notebook for their nefarious purposes increases. IPython 2.0 introduced a security model to prevent execution of untrusted code without explicit user input.

The problem#

The whole point of Jupyter is arbitrary code execution. We have no desire to limit what can be done with a notebook, which would negatively impact its utility.

Unlike other programs, a Jupyter notebook document includes output. Unlike other documents, that output exists in a context that can execute code (via Javascript).

The security problem we need to solve is that no code should execute just because a user has opened a notebook that they did not write. Like any other program, once a user decides to execute code in a notebook, it is considered trusted, and should be allowed to do anything.

Our security model#
  • Untrusted HTML is always sanitized

  • Untrusted Javascript is never executed

  • HTML and Javascript in Markdown cells are never trusted

  • Outputs generated by the user are trusted

  • Any other HTML or Javascript (in Markdown cells, output generated by others) is never trusted

  • The central question of trust is “Did the current user do this?”

The details of trust#

When a notebook is executed and saved, a signature is computed from a digest of the notebook’s contents plus a secret key. This is stored in a database, writable only by the current user. By default, this is located at:

~/.local/share/jupyter/nbsignatures.db  # Linux
~/Library/Jupyter/nbsignatures.db       # OS X
%APPDATA%/jupyter/nbsignatures.db       # Windows

Each signature represents a series of outputs which were produced by code the current user executed, and are therefore trusted.

When you open a notebook, the server computes its signature, and checks if it’s in the database. If a match is found, HTML and Javascript output in the notebook will be trusted at load, otherwise it will be untrusted.

Any output generated during an interactive session is trusted.

Updating trust#

A notebook’s trust is updated when the notebook is saved. If there are any untrusted outputs still in the notebook, the notebook will not be trusted, and no signature will be stored. If all untrusted outputs have been removed (either via Clear Output or re-execution), then the notebook will become trusted.

While trust is updated per output, this is only for the duration of a single session. A newly loaded notebook file is either trusted or not in its entirety.

Explicit trust#

Sometimes re-executing a notebook to generate trusted output is not an option, either because dependencies are unavailable, or it would take a long time. Users can explicitly trust a notebook in two ways:

  • At the command-line, with:

    jupyter trust /path/to/notebook.ipynb
    
  • After loading the untrusted notebook, with File / Trust Notebook

These two methods simply load the notebook, compute a new signature, and add that signature to the user’s database.
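
The same can be done programmatically with the nbformat package, which provides the notebook signing machinery. A minimal sketch (the notebook path is a placeholder):

import nbformat
from nbformat.sign import NotebookNotary

# Load the notebook, compute its signature, and record it in the signatures database
nb = nbformat.read("/path/to/notebook.ipynb", as_version=4)
NotebookNotary().sign(nb)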

Reporting security issues#

If you find a security vulnerability in Jupyter, either a failure of the code to properly implement the model described here, or a failure of the model itself, please report it to security@ipython.org.

If you prefer to encrypt your security reports, you can use this PGP public key.

Affected use cases#

Some use cases that work in Jupyter 1.0 became less convenient in 2.0 as a result of the security changes. We do our best to minimize these annoyances, but security is always at odds with convenience.

Javascript and CSS in Markdown cells#

While never officially supported, it had become common practice to put hidden Javascript or CSS styling in Markdown cells, so that they would not be visible on the page. Since Markdown cells are now sanitized (by Google Caja), all Javascript (including click event handlers, etc.) and CSS will be stripped.

We plan to provide a mechanism for notebook themes, but in the meantime styling the notebook can only be done via either custom.css or CSS in HTML output. The latter only has an effect if the notebook is trusted, because otherwise the output will be sanitized just like Markdown.

Collaboration#

When collaborating on a notebook, people probably want to see the outputs produced by their colleagues’ most recent executions. Since each collaborator’s key will differ, this will result in each share starting in an untrusted state. There are three basic approaches to this:

  • re-run notebooks when you get them (not always viable)

  • explicitly trust notebooks via jupyter trust or the notebook menu (annoying, but easy)

  • share a notebook signatures database, and use configuration dedicated to the collaboration while working on the project.

To share a signatures database among users, you can configure:

c.NotebookNotary.data_dir = "/path/to/signature_dir"

to specify a non-default path to the SQLite database (of notebook hashes, essentially).

Configuring Logging#

Jupyter Server and Jupyter Server extension applications (such as Jupyter Lab) are Traitlets applications.

By default, Traitlets applications log to stderr. You can configure them to log to other locations, e.g. log files.

Logging is configured via the logging_config “trait”, which accepts a dictionary in the logging.config.dictConfig() format. For more information, look for Application.logging_config in Config file and command line options.

Examples#
Jupyter Server#

A minimal example which logs Jupyter Server output to a file:

c.ServerApp.logging_config = {
    "version": 1,
    "handlers": {
        "logfile": {
            "class": "logging.FileHandler",
            "level": "DEBUG",
            "filename": "jupyter_server.log",
        },
    },
    "loggers": {
        "ServerApp": {
            "level": "DEBUG",
            "handlers": ["console", "logfile"],
        },
    },
}

Note

To keep the default behaviour of logging to stderr, ensure the console handler (provided by Traitlets) is included in the list of handlers.

Warning

Be aware that the ServerApp log may contain security tokens. If redirecting to log files, ensure they have appropriate permissions.

Jupyter Server Extension Applications (e.g. Jupyter Lab)#

An example which logs both Jupyter Server and Jupyter Lab output to a file:

Note

Because Jupyter Server and its extension applications are separate Traitlets applications their logging must be configured separately.

c.ServerApp.logging_config = {
    "version": 1,
    "handlers": {
        "logfile": {
            "class": "logging.FileHandler",
            "level": "DEBUG",
            "filename": "jupyter_server.log",
            "formatter": "my_format",
        },
    },
    "formatters": {
        "my_format": {
            "format": "%(asctime)s %(levelname)-8s %(name)-15s %(message)s",
            "datefmt": "%Y-%m-%d %H:%M:%S",
        },
    },
    "loggers": {
        "ServerApp": {
            "level": "DEBUG",
            "handlers": ["console", "logfile"],
        },
    },
}

c.LabApp.logging_config = {
    "version": 1,
    "handlers": {
        "logfile": {
            "class": "logging.FileHandler",
            "level": "DEBUG",
            "filename": "jupyter_server.log",
            "formatter": "my_format",
        },
    },
    "formatters": {
        "my_format": {
            "format": "%(asctime)s %(levelname)-8s %(name)-15s %(message)s",
            "datefmt": "%Y-%m-%d %H:%M:%S",
        },
    },
    "loggers": {
        "LabApp": {
            "level": "DEBUG",
            "handlers": ["console", "logfile"],
        },
    },
}

Note

The configured application name should match the logger name e.g. c.LabApp.logging_config defines a logger called LabApp.

Tip

This diff modifies the example to log Jupyter Server and Jupyter Lab output to different files:

--- before
+++ after
 c.LabApp.logging_config = {
     'version': 1,
     'handlers': {
         'logfile': {
             'class': 'logging.FileHandler',
             'level': 'DEBUG',
-            'filename': 'jupyter_server.log',
+            'filename': 'jupyter_lab.log',
             'formatter': 'my_format',
         },
     },

Documentation for Developers#

These pages target people writing Jupyter Web applications and server extensions, or people who need to dive deeper in Jupyter Server’s REST API and configuration system.

Architecture Diagrams#

This page describes the Jupyter Server architecture and the main workflows. This information is useful for developers who want to understand how Jupyter Server components are connected and what the principal workflows look like.

To make changes to these diagrams, use the Draw.io open source tool to edit the PNG file.

Jupyter Server Architecture#

The Jupyter Server system can be seen in the figure below:

Jupyter Server Architecture

Jupyter Server contains the following components:

  • ServerApp is the main Tornado-based application which connects all components together.

  • Config Manager initializes configuration for the ServerApp. You can define custom classes for the Jupyter Server managers using this config and change ServerApp settings. Follow the Config File Guide to learn about configuration settings and how to build custom config.

  • Custom Extensions allow you to add custom REST API endpoints to the server. Follow the Extension Guide to learn more about extending ServerApp with extra request handlers.

  • Gateway Server is a web server that, when configured, provides access to Jupyter kernels running on other hosts. There are different ways to create a gateway server. If your ServerApp needs to communicate with remote kernels residing within resource-managed clusters, you can use Enterprise Gateway, otherwise, you can use Kernel Gateway, where kernels run locally to the gateway server.

  • Contents Manager and File Contents Manager are responsible for serving Notebooks from the file system. Session Manager uses Contents Manager to obtain the kernel path. Follow the Contents API guide to learn about Contents Manager.

  • Session Manager processes users’ Sessions. When a user starts a new kernel, Session Manager starts a process to provision a kernel for the user and generates a new Session ID. Each opened Notebook has a separate Session, but different Notebook kernels can use the same Session. That is useful if the user wants to share data across various opened Notebooks. Session Manager uses a SQLite3 database to store the Session information. The database is stored in memory by default, but can be configured to save to disk.

  • Mapping Kernel Manager is responsible for managing the lifecycles of the kernels running within the ServerApp. It starts a new kernel for a user’s Session and facilitates interrupt, restart, and shutdown operations against the kernel.

  • Jupyter Client library is used by Jupyter Server to work with the Notebook kernels.

    • Kernel Manager manages a single kernel for the Notebook. To know more about Kernel Manager, follow the Jupyter Client APIs documentation.

    • Kernel Spec Manager parses the JSON specification files for kernels and provides a list of available kernel configurations. To learn about Kernel Spec Manager, check the Jupyter Client guide.

Create Session Workflow#

The create Session workflow can be seen in the figure below:

Create Session Workflow

When a user starts a new kernel, the following steps occur:

  1. The Notebook client sends the POST /api/sessions request to Jupyter Server. This request has all necessary data, such as Notebook name, type, path, and kernel name (an example request body is shown after this list).

  2. Session Manager asks Contents Manager for the kernel file system path based on the input data.

  3. Session Manager sends kernel path to Mapping Kernel Manager.

  4. Mapping Kernel Manager starts the kernel create process by using Multi Kernel Manager and Kernel Manager. You can learn more about Multi Kernel Manager in the Jupyter Client APIs.

  5. Kernel Manager uses the provisioner layer to launch a new kernel.

  6. Kernel Provisioner is responsible for launching kernels based on the kernel specification. If the kernel specification doesn’t define a provisioner, it uses Local Provisioner to launch the kernel. You can use Kernel Provisioner Base and Kernel Provisioner Factory to create custom provisioners.

  7. Kernel Spec Manager gets the kernel specification from the JSON file. The specification is located in the kernel.json file.

  8. Once Kernel Provisioner launches the kernel, Kernel Manager generates the new kernel ID for Session Manager.

  9. Session Manager saves the new Session data to the SQLite3 database (Session ID, Notebook path, Notebook name, Notebook type, and kernel ID).

  10. Notebook client receives the created Session data.
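
For reference, the POST /api/sessions request in step 1 carries a JSON body of roughly the following shape (names and paths are illustrative):

{
    "name": "Untitled.ipynb",
    "path": "Untitled.ipynb",
    "type": "notebook",
    "kernel": {"name": "python3"}
}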

Delete Session Workflow#

The delete Session workflow can be seen in the figure below:

Delete Session Workflow

When a user stops a kernel, the following steps occur:

  1. The Notebook client sends the DELETE /api/sessions/{session_id} request to Jupyter Server. This request has the Session ID that the kernel is currently using.

  2. Session Manager gets the Session data from the SQLite3 database and sends the kernel ID to Mapping Kernel Manager.

  3. Mapping Kernel Manager starts the kernel shutdown process by using Multi Kernel Manager and Kernel Manager.

  4. Kernel Manager determines the mode of interrupt from the Kernel Spec Manager. It supports Signal and Message interrupt modes. By default, the Signal interrupt mode is used.

    • When the interrupt mode is Signal, the Kernel Provisioner interrupts the kernel with the SIGINT operating system signal (although other provisioner implementations may use a different approach).

    • When the interrupt mode is Message, Session sends the “interrupt_request” message on the control channel.

  5. After interrupting the kernel, Session sends the “shutdown_request” message on the control channel.

  6. Kernel Manager waits for the kernel shutdown. After the timeout, and if it detects the kernel process is still running, the Kernel Manager terminates the kernel by sending a SIGTERM operating system signal (or provisioner equivalent). If it finds the kernel process has not terminated, the Kernel Manager will follow up with a SIGKILL operating system signal (or provisioner equivalent) to ensure the kernel’s termination.

  7. Kernel Manager cleans up the kernel resources. It removes the kernel’s interprocess communication ports, closes the control socket, and releases the Shell, IOPub, StdIn, Control, and Heartbeat ports.

  8. When shutdown is finished, Session Manager deletes the Session data from the SQLite3 database and responds with a 204 status code to the Notebook client.

Depending on Jupyter Server#

If your project depends directly on Jupyter Server, be sure to watch Jupyter Server’s ChangeLog and pin your project to a version that works for your application. Major releases represent possible backwards-compatibility breaking API changes or features.

When a new major version is released on PyPI, a branch for that version will be created in this repository, and the version of the master branch will be bumped to the next major version number. That way, the master branch always reflects the latest unreleased version.

To install the latest patch of a given version:

> pip install jupyter_server --upgrade

To pin your jupyter_server install to a specific version:

> pip install jupyter_server==1.0.0

The REST API#

An interactive version is available here.

GET /api/#

Get the Jupyter Server version

This endpoint returns only the Jupyter Server version. It does not require any authentication.

Status Codes:
  • 200 OK – Jupyter Server version information

Response JSON Object:
  • version (string) – The Jupyter Server version number as a string.
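
For example, a client can fetch the version with a plain HTTP GET. A minimal sketch using the requests library (the URL is a placeholder for your server):

import requests

# GET /api/ requires no authentication and returns only the version.
resp = requests.get("http://localhost:8888/api/")
print(resp.json())  # e.g. {"version": "..."}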

GET /api/contents/{path}#

Get contents of file or directory

A client can optionally specify a type and/or format argument via URL parameter. When given, the Contents service shall return a model in the requested type and/or format. If the request cannot be satisfied, e.g. type=text is requested, but the file is binary, then the request shall fail with 400 and have a JSON response containing a ‘reason’ field, with the value ‘bad format’ or ‘bad type’, depending on what was requested.

Parameters:
  • path (string) – file path

Query Parameters:
  • type (string) – File type (‘file’, ‘directory’)

  • format (string) – How file content should be returned (‘text’, ‘base64’)

  • content (integer) – Return content (0 for no content, 1 for return content)

  • hash (integer) – May return hash hexdigest string of content and the hash algorithm (0 for no hash - default, 1 for return hash). It may be ignored by the content manager.

Status Codes:
Response Headers:
  • Last-Modified – Last modified date for file

Response JSON Object:
  • content (string) – The content, if requested (otherwise null). Will be an array if type is ‘directory’ (required)

  • created (string) – Creation timestamp (required)

  • format (string) – Format of content (one of null, ‘text’, ‘base64’, ‘json’) (required)

  • hash (string) – [optional] The hexdigest hash string of content, if requested (otherwise null). It cannot be null if hash_algorithm is defined.

  • hash_algorithm (string) – [optional] The algorithm used to produce the hash, if requested (otherwise null). It cannot be null if hash is defined.

  • last_modified (string) – Last modified timestamp (required)

  • mimetype (string) – The mimetype of a file. If content is not null, and type is ‘file’, this will contain the mimetype of the file, otherwise this will be null. (required)

  • name (string) – Name of file or directory, equivalent to the last part of the path (required)

  • path (string) – Full path for file or directory (required)

  • size (integer) – The size of the file or notebook in bytes. If no size is provided, defaults to null.

  • type (string) – Type of content (required)

  • writable (boolean) – indicates whether the requester has permission to edit the file (required)
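
A sketch of fetching a notebook model from this endpoint (the URL, token, and path are placeholders; the query parameters mirror the ones listed above):

import requests

resp = requests.get(
    "http://localhost:8888/api/contents/notebooks/example.ipynb",  # placeholder path
    params={"type": "notebook", "content": 1},
    headers={"Authorization": "token <your-token>"},  # placeholder token
)
model = resp.json()
print(model["name"], model["type"], model["last_modified"])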

POST /api/contents/{path}#

Create a new file in the specified path

A POST to /api/contents/path creates a new untitled, empty file or directory. A POST to /api/contents/path with body {"copy_from": "/path/to/OtherNotebook.ipynb"} creates a new copy of OtherNotebook in path.

Parameters:
  • path (string) – file path

Request JSON Object:
  • copy_from (string) –

  • ext (string) –

  • type (string) –

Status Codes:
Response Headers:
  • Location – URL for the new file

Response JSON Object:
  • content (string) – The content, if requested (otherwise null). Will be an array if type is ‘directory’ (required)

  • created (string) – Creation timestamp (required)

  • format (string) – Format of content (one of null, ‘text’, ‘base64’, ‘json’) (required)

  • hash (string) – [optional] The hexdigest hash string of content, if requested (otherwise null). It cannot be null if hash_algorithm is defined.

  • hash_algorithm (string) – [optional] The algorithm used to produce the hash, if requested (otherwise null). It cannot be null if hash is defined.

  • last_modified (string) – Last modified timestamp (required)

  • mimetype (string) – The mimetype of a file. If content is not null, and type is ‘file’, this will contain the mimetype of the file, otherwise this will be null. (required)

  • name (string) – Name of file or directory, equivalent to the last part of the path (required)

  • path (string) – Full path for file or directory (required)

  • size (integer) – The size of the file or notebook in bytes. If no size is provided, defaults to null.

  • type (string) – Type of content (required)

  • writable (boolean) – indicates whether the requester has permission to edit the file (required)

PATCH /api/contents/{path}#

Rename a file or directory without re-uploading content

Parameters:
  • path (string) – file path

Request JSON Object:
  • path (string) – New path for file or directory

Status Codes:
Response Headers:
  • Location – Updated URL for the file or directory

Response JSON Object:
  • content (string) – The content, if requested (otherwise null). Will be an array if type is ‘directory’ (required)

  • created (string) – Creation timestamp (required)

  • format (string) – Format of content (one of null, ‘text’, ‘base64’, ‘json’) (required)

  • hash (string) – [optional] The hexdigest hash string of content, if requested (otherwise null). It cannot be null if hash_algorithm is defined.

  • hash_algorithm (string) – [optional] The algorithm used to produce the hash, if requested (otherwise null). It cannot be null if hash is defined.

  • last_modified (string) – Last modified timestamp (required)

  • mimetype (string) – The mimetype of a file. If content is not null, and type is ‘file’, this will contain the mimetype of the file, otherwise this will be null. (required)

  • name (string) – Name of file or directory, equivalent to the last part of the path (required)

  • path (string) – Full path for file or directory (required)

  • size (integer) – The size of the file or notebook in bytes. If no size is provided, defaults to null.

  • type (string) – Type of content (required)

  • writable (boolean) – indicates whether the requester has permission to edit the file (required)

PUT /api/contents/{path}#

Save or upload file.

Saves the file in the location specified by name and path. PUT is very similar to POST, but the requester specifies the name, whereas with POST, the server picks the name.

Parameters:
  • path (string) – file path

Request JSON Object:
  • content (string) – The actual body of the document excluding directory type

  • format (string) – File format (‘json’, ‘text’, ‘base64’)

  • name (string) – The new filename if changed

  • path (string) – New path for file or directory

  • type (string) – Path dtype (‘notebook’, ‘file’, ‘directory’)

Status Codes:
Response Headers:
  • Location – Updated URL for the file or directory

  • Location – URL for the file or directory

Response JSON Object:
  • content (string) – The content, if requested (otherwise null). Will be an array if type is ‘directory’ (required)

  • created (string) – Creation timestamp (required)

  • format (string) – Format of content (one of null, ‘text’, ‘base64’, ‘json’) (required)

  • hash (string) – [optional] The hexdigest hash string of content, if requested (otherwise null). It cannot be null if hash_algorithm is defined.

  • hash_algorithm (string) – [optional] The algorithm used to produce the hash, if requested (otherwise null). It cannot be null if hash is defined.

  • last_modified (string) – Last modified timestamp (required)

  • mimetype (string) – The mimetype of a file. If content is not null, and type is ‘file’, this will contain the mimetype of the file, otherwise this will be null. (required)

  • name (string) – Name of file or directory, equivalent to the last part of the path (required)

  • path (string) – Full path for file or directory (required)

  • size (integer) – The size of the file or notebook in bytes. If no size is provided, defaults to null.

  • type (string) – Type of content (required)

  • writable (boolean) – indicates whether the requester has permission to edit the file (required)

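
A sketch of saving a small text file with this endpoint (the URL, token, and path are placeholders):

import requests

model = {
    "type": "file",
    "format": "text",
    "content": "hello from the Contents API\n",
}
resp = requests.put(
    "http://localhost:8888/api/contents/hello.txt",  # placeholder path
    json=model,
    headers={"Authorization": "token <your-token>"},  # placeholder token
)
print(resp.status_code)  # 200 if an existing file was saved, 201 if it was created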

DELETE /api/contents/{path}#

Delete a file in the given path

Parameters:
  • path (string) – file path

Status Codes:
Response Headers:
  • Location – URL for the removed file

GET /api/contents/{path}/checkpoints#

Get a list of checkpoints for a file

List checkpoints for a given file. There will typically be zero or one results.

Parameters:
  • path (string) – file path

Status Codes:
Response JSON Object:
  • [].id (string) – Unique id for the checkpoint. (required)

  • [].last_modified (string) – Last modified timestamp (required)

POST /api/contents/{path}/checkpoints#

Create a new checkpoint for a file

Create a new checkpoint with the current state of a file. With the default FileContentsManager, only one checkpoint is supported, so creating new checkpoints clobbers existing ones.

Parameters:
  • path (string) – file path

Status Codes:
Response Headers:
  • Location – URL for the checkpoint

Response JSON Object:
  • id (string) – Unique id for the checkpoint. (required)

  • last_modified (string) – Last modified timestamp (required)

POST /api/contents/{path}/checkpoints/{checkpoint_id}#

Restore a file to a particular checkpointed state

Parameters:
  • path (string) – file path

  • checkpoint_id (string) – Checkpoint id for a file

Status Codes:
DELETE /api/contents/{path}/checkpoints/{checkpoint_id}#

Delete a checkpoint

Parameters:
  • path (string) – file path

  • checkpoint_id (string) – Checkpoint id for a file

Status Codes:
GET /api/sessions/{session}#

Get session

Parameters:
  • session (string) – session uuid

Status Codes:
Response JSON Object:
  • id (string) –

  • kernel (any) – Kernel information

  • name (string) – name of the session

  • path (string) – path to the session

  • type (string) – session type

PATCH /api/sessions/{session}#

This can be used to rename the session.

Parameters:
  • session (string) – session uuid

Request JSON Object:
  • id (string) –

  • kernel (any) – Kernel information

  • name (string) – name of the session

  • path (string) – path to the session

  • type (string) – session type

Status Codes:
Response JSON Object:
  • id (string) –

  • kernel (any) – Kernel information

  • name (string) – name of the session

  • path (string) – path to the session

  • type (string) – session type

DELETE /api/sessions/{session}#

Delete a session

Parameters:
  • session (string) – session uuid

Status Codes:
  • 204 No Content – Session (and kernel) were deleted

  • 410 Gone – Kernel was deleted before the session, and the session was not deleted (TODO - check to make sure session wasn’t deleted)

GET /api/sessions#

List available sessions

Status Codes:
  • 200 OK – List of current sessions

Response JSON Object:
  • [].id (string) –

  • [].kernel (any) – Kernel information

  • [].name (string) – name of the session

  • [].path (string) – path to the session

  • [].type (string) – session type

POST /api/sessions#

Create a new session, or return an existing session if a session of the same name already exists

Request JSON Object:
  • id (string) –

  • kernel (any) – Kernel information

  • name (string) – name of the session

  • path (string) – path to the session

  • type (string) – session type

Status Codes:
Response Headers:
  • Location – URL for session commands

Response JSON Object:
  • id (string) –

  • kernel (any) – Kernel information

  • name (string) – name of the session

  • path (string) – path to the session

  • type (string) – session type
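
A sketch of creating a session for a notebook (placeholders throughout; the kernel name must match an installed kernel spec):

import requests

payload = {
    "name": "example.ipynb",
    "path": "notebooks/example.ipynb",  # placeholder notebook path
    "type": "notebook",
    "kernel": {"name": "python3"},      # assumes the python3 kernel spec is installed
}
resp = requests.post(
    "http://localhost:8888/api/sessions",
    json=payload,
    headers={"Authorization": "token <your-token>"},  # placeholder token
)
session = resp.json()
print(session["id"], session["kernel"]["id"])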

GET /api/kernels#

List the JSON data for all kernels that are currently running

Status Codes:
  • 200 OK – List of currently-running kernel uuids

Response JSON Object:
  • [] (any) – Kernel information

POST /api/kernels#

Start a kernel and return the uuid

Request JSON Object:
  • name (string) – Kernel spec name (defaults to default kernel spec for server) (required)

  • path (string) – API path from root to the cwd of the kernel

Status Codes:
Response Headers:
  • Location – Model for started kernel
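
A sketch of starting a standalone kernel (the URL and token are placeholders):

import requests

resp = requests.post(
    "http://localhost:8888/api/kernels",
    json={"name": "python3"},  # assumes the python3 kernel spec is installed
    headers={"Authorization": "token <your-token>"},  # placeholder token
)
print(resp.json()["id"])  # uuid of the started kernel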

GET /api/kernels/{kernel_id}#

Get kernel information

Parameters:
  • kernel_id (string) – kernel uuid

Status Codes:
  • 200 OK – Kernel information

DELETE /api/kernels/{kernel_id}#

Kill a kernel and delete the kernel id

Parameters:
  • kernel_id (string) – kernel uuid

Status Codes:
POST /api/kernels/{kernel_id}/interrupt#

Interrupt a kernel

Parameters:
  • kernel_id (string) – kernel uuid

Status Codes:
POST /api/kernels/{kernel_id}/restart#

Restart a kernel

Parameters:
  • kernel_id (string) – kernel uuid

Status Codes:
Response Headers:
  • Location – URL for kernel commands

GET /api/kernelspecs#

Get kernel specs

Status Codes:
Response JSON Object:
  • default (string) – Default kernel name

  • kernelspecs (object) –

GET /api/config/{section_name}#

Get a configuration section by name

Parameters:
  • section_name (string) – Name of config section

Status Codes:
  • 200 OK – Configuration object

PATCH /api/config/{section_name}#

Update a configuration section by name

Parameters:
  • section_name (string) – Name of config section

Status Codes:
  • 200 OK – Configuration object

GET /api/terminals#

Get available terminals

Status Codes:
Response JSON Object:
  • [].last_activity (string) – ISO 8601 timestamp for the last-seen activity on this terminal. Use this to identify which terminals have been inactive since a given time. Timestamps will be UTC, indicated by the ‘Z’ suffix.

  • [].name (string) – name of terminal (required)

POST /api/terminals#

Create a new terminal

Status Codes:
Response JSON Object:
  • last_activity (string) – ISO 8601 timestamp for the last-seen activity on this terminal. Use this to identify which terminals have been inactive since a given time. Timestamps will be UTC, indicated by the ‘Z’ suffix.

  • name (string) – name of terminal (required)

GET /api/terminals/{terminal_id}#

Get a terminal session corresponding to an id.

Parameters:
  • terminal_id (string) – ID of terminal session

Status Codes:
Response JSON Object:
  • last_activity (string) – ISO 8601 timestamp for the last-seen activity on this terminal. Use this to identify which terminals have been inactive since a given time. Timestamps will be UTC, indicated by the ‘Z’ suffix.

  • name (string) – name of terminal (required)

DELETE /api/terminals/{terminal_id}#

Delete a terminal session corresponding to an id.

Parameters:
  • terminal_id (string) – ID of terminal session

Status Codes:
GET /api/me#

Get the identity of the currently authenticated user. A `permissions` argument may optionally be specified to check which actions the user is currently authorized to take.

Query Parameters:
  • permissions (string) – JSON-serialized dictionary of {"resource": ["action",]} (dict of lists of strings) to check. The same dictionary structure will be returned, containing only the actions for which the user is authorized.

Status Codes:
  • 200 OK – The user’s identity and permissions

Response JSON Object:
  • identity (any) – The identity of the currently authenticated user

  • permissions (object) – A dict of the form: {"resource": ["action",]} containing only the AUTHORIZED subset of resource+actions from the permissions specified in the request. If no permission checks were made in the request, this will be empty.
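
A sketch of an identity-and-permissions check (the URL and token are placeholders; the resource and action names are only examples):

import json
import requests

permissions = {"contents": ["read", "write"], "kernels": ["write"]}  # example checks
resp = requests.get(
    "http://localhost:8888/api/me",
    params={"permissions": json.dumps(permissions)},
    headers={"Authorization": "token <your-token>"},  # placeholder token
)
identity = resp.json()
print(identity["identity"]["username"])
print(identity["permissions"])  # the authorized subset of the checks above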

GET /api/status#

Get the current status/activity of the server.

Status Codes:
  • 200 OK – The current status of the server

GET /api/spec.yaml#

Get the current spec for the notebook server’s APIs.

Status Codes:
  • 200 OK – The current spec for the notebook server’s APIs.

Server Extensions#

A Jupyter Server extension is typically a module or package that extends the Server’s REST API/endpoints, i.e. adds extra request handlers to the Server’s Tornado Web Application.

For examples of jupyter server extensions, see the homepage.

To get started writing your own extension, see the simple examples in the examples folder in the GitHub jupyter_server repository.

Authoring a basic server extension#

The simplest way to write a Jupyter Server extension is to write an extension module with a _load_jupyter_server_extension function. This function should take a single argument, an instance of the ServerApp.

def _load_jupyter_server_extension(serverapp: jupyter_server.serverapp.ServerApp):
    """
    This function is called when the extension is loaded.
    """
    pass
Adding extension endpoints#

The easiest way to add endpoints and handle incoming requests is to subclass the JupyterHandler (which itself is a subclass of Tornado’s RequestHandler).

from jupyter_server.base.handlers import JupyterHandler
import tornado


class MyExtensionHandler(JupyterHandler):
    @tornado.web.authenticated
    def get(self):
        ...

    @tornado.web.authenticated
    def post(self):
        ...

Note

It is best practice to wrap each handler method with the authenticated decorator to ensure that each request is authenticated by the server.

Then add this handler to Jupyter Server’s Web Application through the _load_jupyter_server_extension function.

def _load_jupyter_server_extension(serverapp: jupyter_server.serverapp.ServerApp):
    """
    This function is called when the extension is loaded.
    """
    handlers = [("/myextension/hello", MyExtensionHandler)]
    serverapp.web_app.add_handlers(".*$", handlers)
Making an extension discoverable#

To make this extension discoverable to Jupyter Server, first define a _jupyter_server_extension_points() function at the root of the module/package. This function returns metadata describing how to load the extension. Usually, this requires a module key with the import path to the extension’s _load_jupyter_server_extension function.

def _jupyter_server_extension_points():
    """
    Returns a list of dictionaries with metadata describing
    where to find the `_load_jupyter_server_extension` function.
    """
    return [{"module": "my_extension"}]

Second, add the extension to the ServerApp’s jpserver_extensions trait. This can be manually added by users in their jupyter_server_config.py file,

c.ServerApp.jpserver_extensions = {"my_extension": True}

or loaded from a JSON file in the jupyter_server_config.d directory under one of Jupyter’s paths. (See the Distributing a server extension section for details on how to automatically enable your extension when users install it.)

{"ServerApp": {"jpserver_extensions": {"my_extension": true}}}
Authoring a configurable extension application#

Some extensions are full-fledged client applications that sit on top of the Jupyter Server. For example, JupyterLab is a server extension. It can be launched from the command line, configured via CLI or config files, and serves and loads static assets behind the server (i.e. HTML templates, JavaScript, etc.).

Jupyter Server offers a convenient base class, ExtensionApp, that handles most of the boilerplate code for building such extensions.

Anatomy of an ExtensionApp#

An ExtensionApp:

  • has traits.

  • is configurable (from file or CLI)

  • has a name (see the name trait).

  • has an entrypoint, jupyter <name>.

  • can serve static content from the /static/<name>/ endpoint.

  • can add new endpoints to the Jupyter Server.

The basic structure of an ExtensionApp is shown below:

from jupyter_server.extension.application import ExtensionApp


class MyExtensionApp(ExtensionApp):
    # -------------- Required traits --------------
    name = "myextension"
    default_url = "/myextension"
    load_other_extensions = True
    file_url_prefix = "/render"

    # --- ExtensionApp traits you can configure ---
    static_paths = [...]
    template_paths = [...]
    settings = {...}
    handlers = [...]

    # ----------- add custom traits below ---------
    ...

    def initialize_settings(self):
        ...
        # Update the self.settings trait to pass extra
        # settings to the underlying Tornado Web Application.
        self.settings.update({"<trait>": ...})

    def initialize_handlers(self):
        ...
        # Extend the self.handlers trait
        self.handlers.extend(...)

    def initialize_templates(self):
        ...
        # Change the jinja templating environment

    async def stop_extension(self):
        ...
        # Perform any required shut down steps

The ExtensionApp uses the following methods and properties to connect your extension to the Jupyter server. You do not need to define a _load_jupyter_server_extension function for these apps. Instead, overwrite the pieces below to add your custom settings, handlers and templates:

Methods

  • initialize_settings(): adds custom settings to the Tornado Web Application.

  • initialize_handlers(): appends handlers to the Tornado Web Application.

  • initialize_templates(): initialize the templating engine (e.g. jinja2) for your frontend.

  • stop_extension(): called on server shut down.

Properties

  • name: the name of the extension

  • default_url: the default URL for this extension—i.e. the landing page for this extension when launched from the CLI.

  • load_other_extensions: a boolean enabling/disabling other extensions when launching this extension directly.

  • file_url_prefix: the prefix URL added when opening a document directly from the command line. For example, classic Notebook uses /notebooks to open a document at http://localhost:8888/notebooks/path/to/notebook.ipynb.

ExtensionApp request handlers#

ExtensionApp Request Handlers have a few extra properties.

  • config: the ExtensionApp’s config object.

  • server_config: the ServerApp’s config object.

  • name: the name of the extension to which this handler is linked.

  • static_url(): a method that returns the url to static files (prefixed with /static/<name>).

Jupyter Server provides a convenient mixin class for adding these properties to any JupyterHandler. For example, the basic server extension handler in the section above becomes:

from jupyter_server.base.handlers import JupyterHandler
from jupyter_server.extension.handler import ExtensionHandlerMixin
import tornado


class MyExtensionHandler(ExtensionHandlerMixin, JupyterHandler):
    @tornado.web.authenticated
    def get(self):
        ...

    @tornado.web.authenticated
    def post(self):
        ...
Jinja templating from frontend extensions#

Many Jupyter frontend applications use Jinja for basic HTML templating. Because this is so common, Jupyter Server provides extra mixins that integrate Jinja with Jupyter Server extensions.

Use ExtensionAppJinjaMixin to automatically add a Jinja templating environment to an ExtensionApp. This adds a <name>_jinja2_env setting to Tornado Web Server’s settings that will be used by request handlers.

from jupyter_server.extension.application import ExtensionApp, ExtensionAppJinjaMixin


class MyExtensionApp(ExtensionAppJinjaMixin, ExtensionApp):
    ...

Pair the example above with ExtensionHandlers that also inherit the ExtensionHandlerJinjaMixin mixin. This will automatically load HTML templates from the Jinja templating environment created by the ExtensionApp.

from jupyter_server.base.handlers import JupyterHandler
from jupyter_server.extension.handler import (
    ExtensionHandlerMixin,
    ExtensionHandlerJinjaMixin,
)
import tornado


class MyExtensionHandler(
    ExtensionHandlerMixin, ExtensionHandlerJinjaMixin, JupyterHandler
):
    @tornado.web.authenticated
    def get(self):
        ...

    @tornado.web.authenticated
    def post(self):
        ...

Note

The mixin classes in this example must come before the base classes, ExtensionApp and ExtensionHandler.

Making an ExtensionApp discoverable#

To make an ExtensionApp discoverable by Jupyter Server, add the app key+value pair to the _jupyter_server_extension_points() function example above:

from myextension import MyExtensionApp


def _jupyter_server_extension_points():
    """
    Returns a list of dictionaries with metadata describing
    where to find the `_load_jupyter_server_extension` function.
    """
    return [{"module": "myextension", "app": MyExtensionApp}]
Launching an ExtensionApp#

To launch the application, simply call the ExtensionApp’s launch_instance method.

launch_instance = MyExtensionApp.launch_instance
launch_instance()

To make your extension executable from anywhere on your system, point an entry-point at the launch_instance method in the extension’s setup.py:

from setuptools import setup


setup(
    name="myfrontend",
    # ...
    entry_points={
        "console_scripts": ["jupyter-myextension = myextension:launch_instance"]
    },
)
ExtensionApp as a classic Notebook server extension#

An extension that extends ExtensionApp should still work with the old Tornado server from the classic Jupyter Notebook. The ExtensionApp class provides a method, load_classic_server_extension, that handles the extension initialization. Simply define a load_jupyter_server_extension reference pointing at the load_classic_server_extension method:

# This is typically defined in the root `__init__.py`
# file of the extension package.
load_jupyter_server_extension = MyExtensionApp.load_classic_server_extension

If the extension is enabled, the extension will be loaded when the server starts.

Distributing a server extension#

Putting it all together, authors can distribute their extension by following these steps:

  1. Add a _jupyter_server_extension_points() function at the extension’s root.

    This function should likely live in the __init__.py found at the root of the extension package. It will look something like this:

    # Found in the __init__.py of package
    
    
    def _jupyter_server_extension_points():
        return [{"module": "myextension.app", "app": MyExtensionApp}]
    
  2. Create an extension by writing a _load_jupyter_server_extension() function or subclassing ExtensionApp.

    This is where the extension logic will live (i.e. custom extension handlers, config, etc). See the sections above for more information on how to create an extension.

  3. Add the following JSON config file to the extension package.

    The file should be named after the extension (e.g. myextension.json) and saved in a subdirectory of the package with the prefix: jupyter-config/jupyter_server_config.d/. The extension package will have a similar structure to this example:

    myextension
    ├── myextension/
    │   ├── __init__.py
    │   └── app.py
    ├── jupyter-config/
    │   └── jupyter_server_config.d/
    │       └── myextension.json
    └── setup.py
    

    The contents of the JSON file will tell Jupyter Server to load the extension when a user installs the package:

    {
        "ServerApp": {
            "jpserver_extensions": {
                "myextension": true
            }
        }
    }
    

    When the extension is installed, this JSON file will be copied to the jupyter_server_config.d directory found in one of Jupyter’s paths.

    Users can enable or disable the extension using the command:

    jupyter server extension disable myextension
    

    which will change the boolean value in the JSON file above.

  4. Create a setup.py that automatically enables the extension.

    Add a few extra lines to the extension package’s setup function:

    from setuptools import setup
    
    setup(
        name="myextension",
        # ...
        include_package_data=True,
        data_files=[
            (
                "etc/jupyter/jupyter_server_config.d",
                ["jupyter-config/jupyter_server_config.d/myextension.json"],
            ),
        ],
    )
    
Migrating an extension to use Jupyter Server#

If you’re a developer of a classic Notebook Server extension, your extension should be able to work with both the classic notebook server and jupyter_server.

There are a few key steps to make this happen:

  1. Point Jupyter Server to the load_jupyter_server_extension function with a new reference name.

    The load_jupyter_server_extension function was the key to loading a server extension in the classic Notebook Server. Jupyter Server expects the name of this function to be prefixed with an underscore—i.e. _load_jupyter_server_extension. You can easily achieve this by adding a reference to the old function name with the new name in the same module.

    def load_jupyter_server_extension(nb_server_app):
        ...
    
    
    # Reference the old function name with the new function name.
    
    _load_jupyter_server_extension = load_jupyter_server_extension
    
  2. Add new data files to your extension package that enable it with Jupyter Server.

    This new file can go next to your classic notebook server data files. Create a new sub-directory, jupyter_server_config.d, and add a new .json file there:

    myextension
    ├── myextension/
    │   ├── __init__.py
    │   └── app.py
    ├── jupyter-config/
    │   ├── jupyter_notebook_config.d/
    │   │   └── myextension.json
    │   └── jupyter_server_config.d/
    │       └── myextension.json
    └── setup.py
    

    The new .json file should look something like this (you’ll notice the changes in the configured class and trait names):

    {
        "ServerApp": {
            "jpserver_extensions": {
                "myextension": true
            }
        }
    }
    

    Update your extension package’s setup.py so that the data-files are moved into the jupyter configuration directories when users download the package.

    from setuptools import setup
    
    setup(
        name="myextension",
        # ...
        include_package_data=True,
        data_files=[
            (
                "etc/jupyter/jupyter_server_config.d",
                ["jupyter-config/jupyter_server_config.d/myextension.json"],
            ),
            (
                "etc/jupyter/jupyter_notebook_config.d",
                ["jupyter-config/jupyter_notebook_config.d/myextension.json"],
            ),
        ],
    )
    
  3. (Optional) Point extension at the new favicon location.

    The favicons in the Jupyter Notebook have been moved to a new location in Jupyter Server. If your extension is using one of these icons, you’ll want to add a set of redirect handlers for this. (In ExtensionApp, this is handled automatically.)

    This usually means adding a chunk to your load_jupyter_server_extension function similar to this:

    from tornado.web import RedirectHandler

    from jupyter_server.utils import url_path_join


    def load_jupyter_server_extension(nb_server_app):
        web_app = nb_server_app.web_app
        host_pattern = ".*$"
        base_url = web_app.settings["base_url"]

        # Add custom extension handlers.
        custom_handlers = [
            # ...
        ]

        # Redirect handlers mapping /static/favicons/<name> requests
        # onto the static/base/images/<name> files.
        favicon_files = [
            "favicon.ico",
            "favicon-busy-1.ico",
            "favicon-busy-2.ico",
            "favicon-busy-3.ico",
            "favicon-file.ico",
            "favicon-notebook.ico",
            "favicon-terminal.ico",
        ]
        favicon_redirects = [
            (
                url_path_join(base_url, "/static/favicons/" + favicon),
                RedirectHandler,
                {"url": url_path_join(base_url, "static/base/images/" + favicon)},
            )
            for favicon in favicon_files
        ]
        favicon_redirects.append(
            (
                url_path_join(base_url, "/static/logo/logo.png"),
                RedirectHandler,
                {"url": url_path_join(base_url, "static/base/images/logo.png")},
            )
        )

        web_app.add_handlers(host_pattern, custom_handlers + favicon_redirects)
    

File save hooks#

You can configure functions that are run whenever a file is saved. There are two hooks available:

  • ContentsManager.pre_save_hook runs on the API path and the model with content. This can be used for things like stripping output before a notebook is committed, to avoid adding noise to version control.

  • FileContentsManager.post_save_hook runs on the filesystem path and model without content. This could be used to commit changes after every save, for instance.

They are both called with keyword arguments:

pre_save_hook(model=model, path=path, contents_manager=cm)
post_save_hook(model=model, os_path=os_path, contents_manager=cm)
Examples#

These can both be added to jupyter_server_config.py.

A pre-save hook for stripping output:

def scrub_output_pre_save(model, **kwargs):
    """scrub output before saving notebooks"""
    # only run on notebooks
    if model['type'] != 'notebook':
        return
    # only run on nbformat v4
    if model['content']['nbformat'] != 4:
        return

    for cell in model['content']['cells']:
        if cell['cell_type'] != 'code':
            continue
        cell['outputs'] = []
        cell['execution_count'] = None

c.FileContentsManager.pre_save_hook = scrub_output_pre_save

A post-save hook to make a script equivalent whenever the notebook is saved (replacing the --script option in older versions of the notebook):

import io
import os
from jupyter_server.utils import to_api_path

_script_exporter = None


def script_post_save(model, os_path, contents_manager, **kwargs):
    """convert notebooks to Python script after save with nbconvert

    replaces `ipython notebook --script`
    """
    from nbconvert.exporters.script import ScriptExporter

    if model["type"] != "notebook":
        return

    global _script_exporter

    if _script_exporter is None:
        _script_exporter = ScriptExporter(parent=contents_manager)

    log = contents_manager.log

    base, ext = os.path.splitext(os_path)
    py_fname = base + ".py"
    script, resources = _script_exporter.from_filename(os_path)
    script_fname = base + resources.get("output_extension", ".txt")
    log.info("Saving script /%s", to_api_path(script_fname, contents_manager.root_dir))

    with io.open(script_fname, "w", encoding="utf-8") as f:
        f.write(script)


c.FileContentsManager.post_save_hook = script_post_save

This could be a simple call to jupyter nbconvert --to script, but spawning the subprocess every time is quite slow.

Note

Assigning a new hook to e.g. c.FileContentsManager.pre_save_hook will override any existing one.

If you want to add new hooks and keep existing ones, you should use e.g.:

contents_manager.register_pre_save_hook(scrub_output_pre_save)
contents_manager.register_post_save_hook(script_post_save)

Hooks will then be called in the order they were registered.

Contents API#

The Jupyter Notebook web application provides a graphical interface for creating, opening, renaming, and deleting files in a virtual filesystem.

The ContentsManager class defines an abstract API for translating these interactions into operations on a particular storage medium. The default implementation, FileContentsManager, uses the local filesystem of the server for storage and straightforwardly serializes notebooks into JSON. Users can override these behaviors by supplying custom subclasses of ContentsManager.

This section describes the interface implemented by ContentsManager subclasses. We refer to this interface as the Contents API.

Data Model#
Filesystem Entities#

ContentsManager methods represent virtual filesystem entities as dictionaries, which we refer to as models.

Models may contain the following entries:

  • name (unicode) – Basename of the entity.

  • path (unicode) – Full (API-style) path to the entity.

  • type (unicode) – The entity type. One of "notebook", "file" or "directory".

  • created (datetime) – Creation date of the entity.

  • last_modified (datetime) – Last modified date of the entity.

  • content (variable) – The “content” of the entity. (See Below)

  • mimetype (unicode or None) – The mimetype of content, if any. (See Below)

  • format (unicode or None) – The format of content, if any. (See Below)

  • hash (unicode or None) – [optional] The hash of the contents. It cannot be null if hash_algorithm is defined.

  • hash_algorithm (unicode or None) – [optional] The algorithm used to compute the hash value. It cannot be null if hash is defined.

Certain model fields vary in structure depending on the type field of the model. There are three model types: notebook, file, and directory.

  • notebook models
    • The format field is always "json".

    • The mimetype field is always None.

    • The content field contains a nbformat.notebooknode.NotebookNode representing the .ipynb file represented by the model. See the NBFormat documentation for a full description.

    • The hash field is a hexdigest string of the hash of the file’s contents. If ContentsManager.get does not support hashing, it should always be None.

    • hash_algorithm is the algorithm used to compute the hash value.

  • file models
    • The format field is either "text" or "base64".

    • The mimetype field is text/plain for text-format models and application/octet-stream for base64-format models.

    • The content field is always of type unicode. For text-format file models, content simply contains the file’s bytes after decoding as UTF-8. Non-text (base64) files are read as bytes, base64 encoded, and then decoded as UTF-8.

    • The hash field is a hexdigest string of the hash of the file’s contents. If ContentsManager.get does not support hashing, it should always be None.

    • hash_algorithm is the algorithm used to compute the hash value.

  • directory models
    • The format field is always "json".

    • The mimetype field is always None.

    • The content field contains a list of content-free models representing the entities in the directory.

    • The hash field is always None.

Note

In certain circumstances, we don’t need the full content of an entity to complete a Contents API request. In such cases, we omit the mimetype, content, and format keys from the model. This most commonly occurs when listing a directory, in which circumstance we represent files within the directory as content-less models to avoid having to recursively traverse and serialize the entire filesystem.

Sample Models

# Notebook Model with Content and Hash
{
    "content": {
        "metadata": {},
        "nbformat": 4,
        "nbformat_minor": 0,
        "cells": [
            {
                "cell_type": "markdown",
                "metadata": {},
                "source": "Some **Markdown**",
            },
        ],
    },
    "created": datetime(2015, 7, 25, 19, 50, 19, 19865),
    "format": "json",
    "last_modified": datetime(2015, 7, 25, 19, 50, 19, 19865),
    "mimetype": None,
    "name": "a.ipynb",
    "path": "foo/a.ipynb",
    "type": "notebook",
    "writable": True,
    "hash": "f5e43a0b1c2e7836ab3b4d6b1c35c19e2558688de15a6a14e137a59e4715d34b",
    "hash_algorithm": "sha256",
}

# Notebook Model without Content
{
    "content": None,
    "created": datetime.datetime(2015, 7, 25, 20, 17, 33, 271931),
    "format": None,
    "last_modified": datetime.datetime(2015, 7, 25, 20, 17, 33, 271931),
    "mimetype": None,
    "name": "a.ipynb",
    "path": "foo/a.ipynb",
    "type": "notebook",
    "writable": True,
}
API Paths#

ContentsManager methods represent the locations of filesystem resources as API-style paths. Such paths are interpreted as relative to the root directory of the notebook server. For compatibility across systems, the following guarantees are made:

  • Paths are always unicode, not bytes.

  • Paths are not URL-escaped.

  • Paths are always forward-slash (/) delimited, even on Windows.

  • Leading and trailing slashes are stripped. For example, /foo/bar/buzz/ becomes foo/bar/buzz.

  • The empty string ("") represents the root directory.

Writing a Custom ContentsManager#

The default ContentsManager is designed for users running the notebook as an application on a personal computer. It stores notebooks as .ipynb files on the local filesystem, and it maps files and directories in the Notebook UI to files and directories on disk. It is possible to override how notebooks are stored by implementing your own custom subclass of ContentsManager. For example, if you deploy the notebook in a context where you don’t trust or don’t have access to the filesystem of the notebook server, it’s possible to write your own ContentsManager that stores notebooks and files in a database.

Required Methods#

A minimal complete implementation of a custom ContentsManager must implement the following methods:

ContentsManager.get(path[, content, type, ...])

Get a file or directory model.

ContentsManager.save(model, path)

Save a file or directory model to path.

ContentsManager.delete_file(path)

Delete the file or directory at path.

ContentsManager.rename_file(old_path, new_path)

Rename a file or directory.

ContentsManager.file_exists([path])

Does a file exist at the given path?

ContentsManager.dir_exists(path)

Does a directory exist at the given path?

ContentsManager.is_hidden(path)

Is path a hidden directory or file?
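
A skeletal, non-functional sketch of such a subclass is shown below; the class name and storage backend are hypothetical, and a real implementation would return and accept the models described in the Data Model section:

from jupyter_server.services.contents.manager import ContentsManager


class DatabaseContentsManager(ContentsManager):
    """Hypothetical manager that stores notebooks and files in a database."""

    def get(self, path, content=True, type=None, format=None, **kwargs):
        # Return a model dict for the entity at `path`.
        raise NotImplementedError

    def save(self, model, path=""):
        # Persist `model` at `path` and return the saved (content-free) model.
        raise NotImplementedError

    def delete_file(self, path):
        raise NotImplementedError

    def rename_file(self, old_path, new_path):
        raise NotImplementedError

    def file_exists(self, path=""):
        return False

    def dir_exists(self, path):
        return path == ""  # only the root directory "exists" in this stub

    def is_hidden(self, path):
        return False

If such a class were real, it could be selected via the c.ServerApp.contents_manager_class trait in a config file.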

You may be required to specify a Checkpoints object, as the default one, FileCheckpoints, could be incompatible with your custom ContentsManager.

Customizing Checkpoints#

Customized Checkpoint definitions allow behavior to be altered and extended.

The Checkpoints and GenericCheckpointsMixin classes (from jupyter_server.services.contents.checkpoints) have reusable code and are intended to be used together, but require the following methods to be implemented.

Checkpoints.rename_checkpoint(checkpoint_id, ...)

Rename a single checkpoint from old_path to new_path.

Checkpoints.list_checkpoints(path)

Return a list of checkpoints for a given file

Checkpoints.delete_checkpoint(checkpoint_id, ...)

delete a checkpoint for a file

GenericCheckpointsMixin.create_file_checkpoint(...)

Create a checkpoint of the current state of a file

GenericCheckpointsMixin.create_notebook_checkpoint(nb, ...)

Create a checkpoint of the current state of a file

GenericCheckpointsMixin.get_file_checkpoint(...)

Get the content of a checkpoint for a non-notebook file.

GenericCheckpointsMixin.get_notebook_checkpoint(...)

Get the content of a checkpoint for a notebook.

No-op example#

Here is an example of a no-op checkpoints object - note the mixin comes first. The docstrings indicate what each method should do or return for a more complete implementation.

class NoOpCheckpoints(GenericCheckpointsMixin, Checkpoints):
    """requires the following methods:"""

    def create_file_checkpoint(self, content, format, path):
        """-> checkpoint model"""

    def create_notebook_checkpoint(self, nb, path):
        """-> checkpoint model"""

    def get_file_checkpoint(self, checkpoint_id, path):
        """-> {'type': 'file', 'content': <str>, 'format': {'text', 'base64'}}"""

    def get_notebook_checkpoint(self, checkpoint_id, path):
        """-> {'type': 'notebook', 'content': <output of nbformat.read>}"""

    def delete_checkpoint(self, checkpoint_id, path):
        """deletes a checkpoint for a file"""

    def list_checkpoints(self, path):
        """returns a list of checkpoint models for a given file,
        default just does one per file
        """
        return []

    def rename_checkpoint(self, checkpoint_id, old_path, new_path):
        """renames checkpoint from old path to new path"""

See GenericFileCheckpoints in notebook.services.contents.filecheckpoints for a more complete example.

Testing#

jupyter_server.services.contents.tests includes several test suites written against the abstract Contents API. This means that an excellent way to test a new ContentsManager subclass is to subclass our tests to make them use your ContentsManager.

Note

PGContents is an example of a complete implementation of a custom ContentsManager. It stores notebooks and files in PostgreSQL and encodes directories as SQL relations. PGContents also provides an example of how to reuse the notebook’s tests.

Asynchronous Support#

An asynchronous version of the Contents API is available to run slow IO processes concurrently.

  • AsyncContentsManager

  • AsyncFileContentsManager

  • AsyncLargeFileManager

  • AsyncCheckpoints

  • AsyncGenericCheckpointsMixin

Note

In most cases, the non-asynchronous Contents API is performant for local filesystems. However, if the Jupyter Notebook web application is interacting with a high-latency virtual filesystem, you may see performance gains by using the asynchronous version. For example, if you’re experiencing terminal lag in the web application due to slow, blocking file operations, the asynchronous version can reduce the lag. Before opting in, it is recommended to compare the performance of the non-async and async options.
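
If you do opt in, the contents manager class can be swapped via configuration. A minimal sketch for jupyter_server_config.py, assuming the asynchronous large-file manager shipped with Jupyter Server:

# Use the asynchronous contents manager (import path assumed from jupyter_server).
c.ServerApp.contents_manager_class = (
    "jupyter_server.services.contents.largefilemanager.AsyncLargeFileManager"
)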

WebSocket kernel wire protocols#

The Jupyter Server needs to pass messages between kernels and the Jupyter web application. Kernels use ZeroMQ sockets, and the web application uses a WebSocket.

ZeroMQ wire protocol#

The kernel wire protocol over ZeroMQ takes advantage of multipart messages, allowing a message to be decomposed into parts that are sent and received unmerged. The following table shows the message format (the beginning has been omitted for clarity):

Format of a kernel message over ZeroMQ socket (indices refer to parts, not bytes)#

  • part 0 – header

  • part 1 – parent_header

  • part 2 – metadata

  • part 3 – content

  • part 4 – buffer_0

  • part 5 – buffer_1

See also the Jupyter Client documentation.

Note that a set of ZeroMQ sockets, one for each channel (shell, iopub, etc.), is multiplexed into one WebSocket. Thus, the channel name must be encoded in WebSocket messages.

WebSocket protocol negotiation#

When opening a WebSocket, the Jupyter web application can optionally provide a list of subprotocols it supports (see e.g. the MDN documentation). If nothing is provided (empty list), then the Jupyter Server assumes the default protocol will be used. Otherwise, the Jupyter Server must select one of the provided subprotocols, or none of them. If none of them is selected, the Jupyter Server must reply with an empty string, which means that the default protocol will be used.

Default WebSocket protocol#

The Jupyter Server must support the default protocol, in which a kernel message is serialized over WebSocket as follows:

Format of a kernel message over WebSocket (indices refer to bytes)#

  • starting at byte 0 – offset_0

  • starting at byte 4 – offset_1

  • starting at byte 8 – offset_2

  • starting at offset_0 – msg

  • starting at offset_1 – buffer_0

  • starting at offset_2 – buffer_1

Where:

  • offset_0 is the position of the kernel message (msg) from the beginning of this message, in bytes.

  • offset_1 is the position of the first binary buffer (buffer_0) from the beginning of this message, in bytes (optional).

  • offset_2 is the position of the second binary buffer (buffer_1) from the beginning of this message, in bytes (optional).

  • msg is the kernel message, excluding binary buffers and including the channel name, as a UTF8-encoded stringified JSON.

  • buffer_0 is the first binary buffer (optional).

  • buffer_1 is the second binary buffer (optional).

The message can be deserialized by parsing msg as a JSON object (after decoding it to a string):

msg = {
    "channel": channel,
    "header": header,
    "parent_header": parent_header,
    "metadata": metadata,
    "content": content,
}

The channel name can then be retrieved from the parsed object, and the binary buffers, if any, collected alongside it:

buffers = [
    buffer_0,
    buffer_1,
    # ...
]
v1.kernel.websocket.jupyter.org protocol#

The Jupyter Server can optionally support the v1.kernel.websocket.jupyter.org protocol, in which a kernel message is serialized over WebSocket as follows:

Format of a kernel message over WebSocket (indices refer to bytes)#

  • starting at byte 0 – offset_number

  • starting at byte 8 – offset_0

  • starting at byte 16 – offset_1

  • …

  • starting at byte 8*offset_number – offset_n

  • starting at offset_0 – channel

  • starting at offset_1 – header

  • starting at offset_2 – parent_header

  • starting at offset_3 – metadata

  • starting at offset_4 – content

  • starting at offset_5 – buffer_0

  • starting at offset_6 – buffer_1

  • …

Where:

  • offset_number is a 64-bit (little endian) unsigned integer.

  • offset_0 to offset_n are 64-bit (little endian) unsigned integers (with n=offset_number-1).

  • channel is a UTF-8 encoded string containing the channel for the message (shell, iopub, etc.).

  • header, parent_header, metadata, and content are UTF-8 encoded JSON text representing the given part of a message in the Jupyter message protocol.

  • offset_n is the total number of bytes in the message.

  • The message can be deserialized from the bin_msg serialized message as follows (Python code):

import json

channel = bin_msg[offset_0:offset_1].decode("utf-8")
header = json.loads(bin_msg[offset_1:offset_2])
parent_header = json.loads(bin_msg[offset_2:offset_3])
metadata = json.loads(bin_msg[offset_3:offset_4])
content = json.loads(bin_msg[offset_4:offset_5])
buffer_0 = bin_msg[offset_5:offset_6]
buffer_1 = bin_msg[offset_6:offset_7]
# ...
last_buffer = bin_msg[offset_n_minus_1:offset_n]
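
Conversely, serialization writes the offset table first and then the parts. The following is an illustrative sketch based on the layout described above, not the server’s own implementation (the helper name serialize_v1 is made up):

import json
import struct


def serialize_v1(channel, header, parent_header, metadata, content, buffers=()):
    # The channel name and four JSON parts, followed by any binary buffers.
    parts = [
        channel.encode("utf-8"),
        json.dumps(header).encode("utf-8"),
        json.dumps(parent_header).encode("utf-8"),
        json.dumps(metadata).encode("utf-8"),
        json.dumps(content).encode("utf-8"),
        *buffers,
    ]
    offset_number = len(parts) + 1       # offsets offset_0 ... offset_n
    offsets = [8 * (offset_number + 1)]  # offset_0 points just past the offset table
    for part in parts:
        offsets.append(offsets[-1] + len(part))
    return (
        struct.pack("<Q", offset_number)
        + struct.pack("<" + "Q" * offset_number, *offsets)
        + b"".join(parts)
    )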

jupyter_server#

jupyter_server package#
Subpackages#
jupyter_server.auth package#
Submodules#

An Authorizer for use in the Jupyter server.

The default authorizer (AllowAllAuthorizer) allows all authenticated requests.

New in version 2.0.

class jupyter_server.auth.authorizer.AllowAllAuthorizer(**kwargs)#

Bases: Authorizer

A no-op implementation of the Authorizer

This authorizer allows all authenticated requests.

New in version 2.0.

is_authorized(handler, user, action, resource)#

This method always returns True.

All authenticated users are allowed to do anything in the Jupyter Server.

Return type:

bool

class jupyter_server.auth.authorizer.Authorizer(**kwargs)#

Bases: LoggingConfigurable

Base class for authorizing access to resources in the Jupyter Server.

All authorizers used in Jupyter Server should inherit from this base class and, at the very minimum, implement an is_authorized method with the same signature as in this base class.

The is_authorized method is called by the @authorized decorator in JupyterHandler. If it returns True, the incoming request to the server is accepted; if it returns False, the server returns a 403 (Forbidden) error code.

The authorization check will only be applied to requests that have already been authenticated.

New in version 2.0.

identity_provider#

A trait whose value must be an instance of a specified class.

The value can also be an instance of a subclass of the specified class.

Subclasses can declare default classes by overriding the klass attribute

is_authorized(handler, user, action, resource)#

A method to determine if user is authorized to perform action (read, write, or execute) on the resource type.

Parameters:
Returns:

True if user authorized to make request; False, otherwise

Return type:

bool
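
A minimal sketch of a custom authorizer, assuming it is wired up via the ServerApp.authorizer_class trait; the class name and policy are only illustrative:

from jupyter_server.auth.authorizer import Authorizer


class ReadOnlyAuthorizer(Authorizer):
    """Hypothetical authorizer that only allows read actions."""

    def is_authorized(self, handler, user, action, resource):
        # Permit reads on any resource; deny writes and executions.
        return action == "read"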

Decorator for layering authorization into JupyterHandlers.

jupyter_server.auth.decorator.allow_unauthenticated(method)#

A decorator for tornado.web.RequestHandler methods that allows any user to make the following request.

Selectively disables the ‘authentication’ layer of REST API which is active when ServerApp.allow_unauthenticated_access = False.

To be used exclusively on endpoints which may be considered public, for example the login page handler.

New in version 2.13.

Parameters:

method (bound callable) – the endpoint method to remove authentication from.

Return type:

TypeVar(FuncT, bound= Callable[..., Any])

jupyter_server.auth.decorator.authorized(action=None, resource=None, message=None)#

A decorator for tornado.web.RequestHandler methods that verifies whether the current user is authorized to make the following request.

Helpful for adding an ‘authorization’ layer to a REST API.

New in version 2.0.

Parameters:
  • action (str) – the type of permission or action to check.

  • resource (str or None) – the name of the resource the action is being authorized to access.

  • message (str or none) – a message for the unauthorized action.

Return type:

TypeVar(FuncT, bound= Callable[..., Any])
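
A sketch of the decorator applied to a handler method; the action and resource names here are placeholders, and authorization runs only after authentication:

from jupyter_server.base.handlers import JupyterHandler
from jupyter_server.auth.decorator import authorized
import tornado


class MyResourceHandler(JupyterHandler):
    @tornado.web.authenticated
    @authorized("read", resource="myextension")
    def get(self):
        ...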

jupyter_server.auth.decorator.ws_authenticated(method)#

A decorator for websockets derived from WebSocketHandler that authenticates the user before allowing the request to proceed.

Unlike tornado.web.authenticated, it does not redirect to the login page, which would be meaningless for websockets.

New in version 2.13.

Parameters:

method (bound callable) – the endpoint method to add authentication for.

Return type:

TypeVar(FuncT, bound= Callable[..., Any])

Identity Provider interface

This defines the _authentication_ layer of Jupyter Server, to be used in combination with Authorizer for _authorization_.

New in version 2.0.

class jupyter_server.auth.identity.IdentityProvider(**kwargs)#

Bases: LoggingConfigurable

Interface for providing identity management and authentication.

Two principal methods:

  • get_user() returns a User object for successful authentication, or None for no-identity-found.

  • identity_model() turns a User into a JSONable dict. The default is to use dataclasses.asdict() and usually shouldn’t need to be overridden.

Additional methods can customize authentication.

New in version 2.0.

property auth_enabled#

Is authentication enabled?

Should always be True, but may be False in rare, insecure cases where requests with no auth are allowed.

Previously: LoginHandler.get_login_available

auth_header_pat = re.compile('(token|bearer)\\s+(.+)', re.IGNORECASE)#

Clear the login cookie, effectively logging out the session.

Return type:

None

cookie_name: str | Unicode[str, str | bytes]#

Name of the cookie to set for persisting login. Default: username-${Host}.

cookie_options#

Extra keyword arguments to pass to set_secure_cookie. See tornado’s set_secure_cookie docs for details.

generate_anonymous_user(handler)#

Generate a random anonymous user.

For use when a single shared token is used, but does not identify a user.

Return type:

User

Return the login cookie name

Uses IdentityProvider.cookie_name, if defined. Default is to generate a string taking host into account to avoid collisions for multiple servers on one hostname with different ports.

Return type:

str

get_handlers()#

Return list of additional handlers for this identity provider

For example, an OAuth callback handler.

Return type:

list[tuple[str, object]]

Extra keyword arguments to pass to get_secure_cookie. See tornado’s get_secure_cookie docs for details.

get_token(handler)#

Get the user token from a request

Default:

  • in URL parameters: ?token=<token>

  • in header: Authorization: token <token>

Return type:

str | None

get_user(handler)#

Get the authenticated user for a request

Must return a jupyter_server.auth.User, though it may be a subclass.

Return None if the request is not authenticated.

May be a coroutine.

Return type:

User | None | t.Awaitable[User | None]

get_user_cookie(handler)#

Get user from a cookie

Calls user_from_cookie to deserialize cookie value

Return type:

User | None | t.Awaitable[User | None]

async get_user_token(handler)#

Identify the user based on a token in the URL or Authorization header

Returns:

  • the authenticated user, if authenticated

  • None, if not

Return type:

User | None

identity_model(user)#

Return a User as an Identity model

Return type:

dict[str, Any]

is_token_authenticated(handler)#

Returns True if handler has been token authenticated. Otherwise, False.

Login with a token is used to signal certain things, such as:

  • permit access to REST API

  • xsrf protection

  • skip origin-checks for scripts

Return type:

bool

property login_available#

Whether a LoginHandler is needed - and therefore whether the login page should be displayed.

login_handler_class#

The login handler class to use, if any.

property logout_available#

Whether a LogoutHandler is needed.

logout_handler_class#

The logout handler class to use.

need_token: bool | Bool[bool, t.Union[bool, int]]#

A boolean (True, False) trait.

process_login_form(handler)#

Process login form data

Return authenticated User if successful, None if not.

Return type:

User | None

secure_cookie#

Specify whether the login cookie should have the secure property (HTTPS-only). Only needed when protocol-detection gives the wrong answer due to proxies.

set_login_cookie(handler, user)#

Call this on handlers to set the login cookie for success

Return type:

None

should_check_origin(handler)#

Should the Handler check for CORS origin validation?

Origin check should be skipped for token-authenticated requests.

Returns:

  • True, if Handler must check for valid CORS origin.

  • False, if Handler should skip origin check since requests are token-authenticated.

Return type:

bool

token: str | Unicode[str, str | bytes]#

Token used for authenticating first-time connections to the server.

The token can be read from the file referenced by JUPYTER_TOKEN_FILE or set directly with the JUPYTER_TOKEN environment variable.

When no password is enabled, the default is to generate a new, random token.

Setting to an empty string disables authentication altogether, which is NOT RECOMMENDED.

Prior to 2.0: configured as ServerApp.token
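
For example, an operator who wants a fixed token rather than the generated one (or the JUPYTER_TOKEN environment variable) might set it in jupyter_server_config.py; the value below is a placeholder:

c.IdentityProvider.token = "<a-long-random-secret>"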

token_generated = False#

user_from_cookie(cookie_value)#

Inverse of user_to_cookie

Return type:

User | None

user_to_cookie(user)#

Serialize a user to a string for storage in a cookie

If overriding in a subclass, make sure to define user_from_cookie as well.

Default is just the user’s username.

Return type:

str

validate_security(app, ssl_options=None)#

Check the application’s security.

Show messages, or abort if necessary, based on the security configuration.

Return type:

None

class jupyter_server.auth.identity.LegacyIdentityProvider(**kwargs)#

Bases: PasswordIdentityProvider

Legacy IdentityProvider for use with custom LoginHandlers

Login configuration has moved from LoginHandler to IdentityProvider in Jupyter Server 2.0.

property auth_enabled#

Return whether any auth is enabled

get_user(handler)#

Get the user.

Return type:

User | None

is_token_authenticated(handler)#

Whether we are token authenticated.

Return type:

bool

property login_available: bool#

Whether a LoginHandler is needed - and therefore whether the login page should be displayed.

settings#

An instance of a Python dict.

One or more traits can be passed to the constructor to validate the keys and/or values of the dict. If you need more detailed validation, you may use a custom validator method.

Changed in version 5.0: Added key_trait for validating dict keys.

Changed in version 5.0: Deprecated ambiguous trait, traits args in favor of value_trait, per_key_traits.

should_check_origin(handler)#

Whether we should check origin.

Return type:

bool

validate_security(app, ssl_options=None)#

Validate security.

Return type:

None

class jupyter_server.auth.identity.PasswordIdentityProvider(**kwargs)#

Bases: IdentityProvider

A password identity provider.

allow_password_change#

Allow password to be changed at login for the Jupyter server.

While logging in with a token, the Jupyter server UI will give the user the opportunity to enter a new password at the same time, which will replace the token login mechanism.

This can be set to False to prevent changing password from the UI/API.

property auth_enabled: bool#

Return whether any auth is enabled

hashed_password#

Hashed password to use for web authentication.

To generate, type in a python/IPython shell:

from jupyter_server.auth import passwd; passwd()

The string should be of the form type:salt:hashed-password.
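
For instance (the hash below is a placeholder; generate your own with passwd()), the result is typically stored in jupyter_server_config.py:

# jupyter_server_config.py
c.PasswordIdentityProvider.hashed_password = "argon2:$argon2id$v=19$m=10240,t=10,p=8$..."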

property login_available: bool#

Whether a LoginHandler is needed - and therefore whether the login page should be displayed.

passwd_check(password)#

Check password against our stored hashed password

password_required#

Forces users to use a password for the Jupyter server. This is useful in a multi user environment, for instance when everybody in the LAN can access each other’s machine through ssh.

In such a case, serving on localhost is not secure since any user can connect to the Jupyter server via ssh.

process_login_form(handler)#

Process login form data

Return authenticated User if successful, None if not.

Return type:

User | None

validate_security(app, ssl_options=None)#

Handle security validation.

Return type:

None

class jupyter_server.auth.identity.User(username, name='', display_name='', initials=None, avatar_url=None, color=None)#

Bases: object

Object representing a User

This or a subclass should be returned from IdentityProvider.get_user

avatar_url: str | None = None#
color: str | None = None#
display_name: str = ''#
fill_defaults()#

Fill out default fields in the identity model

  • Ensures all values are defined

  • Fills out derivative values for name fields

  • Fills out null values for optional fields

initials: str | None = None#
name: str = ''#
username: str#
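
For illustration, a provider could construct and return a user like this (the values are arbitrary):

from jupyter_server.auth.identity import User

user = User(username="jovyan", name="Jovyan Example", display_name="Jovyan")
user.fill_defaults()  # ensures optional fields are defined and derives name-based fields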

Tornado handlers for logging into the Jupyter Server.

class jupyter_server.auth.login.LegacyLoginHandler(application, request, **kwargs)#

Bases: LoginFormHandler

Legacy LoginHandler, implementing most custom auth configuration.

Deprecated in jupyter-server 2.0. Login configuration has moved to IdentityProvider.

auth_header_pat = re.compile('token\\s+(.+)', re.IGNORECASE)#
classmethod get_login_available(settings)#

DEPRECATED in 2.0, use IdentityProvider API

classmethod get_token(handler)#

Get the user token from a request

Default:

  • in URL parameters: ?token=<token>

  • in header: Authorization: token <token>

classmethod get_user(handler)#

DEPRECATED in 2.0, use IdentityProvider API

classmethod get_user_cookie(handler)#

DEPRECATED in 2.0, use IdentityProvider API

classmethod get_user_token(handler)#

DEPRECATED in 2.0, use IdentityProvider API

property hashed_password#
classmethod is_token_authenticated(handler)#

DEPRECATED in 2.0, use IdentityProvider API

passwd_check(a, b)#

Check a passwd.

classmethod password_from_settings(settings)#

DEPRECATED in 2.0, use IdentityProvider API

post()#

Post a login form.

classmethod set_login_cookie(handler, user_id=None)#

Call this on handlers to set the login cookie for success

classmethod should_check_origin(handler)#

DEPRECATED in 2.0, use IdentityProvider API

classmethod validate_security(app, ssl_options=None)#

DEPRECATED in 2.0, use IdentityProvider API

class jupyter_server.auth.login.LoginFormHandler(application, request, **kwargs)#

Bases: JupyterHandler

The basic tornado login handler

accepts login form, passed to IdentityProvider.process_login_form.

get()#

Get the login form.

post()#

Post a login.

jupyter_server.auth.login.LoginHandler#

alias of LegacyLoginHandler

Tornado handlers for logging out of the Jupyter Server.

class jupyter_server.auth.logout.LogoutHandler(application, request, **kwargs)#

Bases: JupyterHandler

An auth logout handler.

get()#

Handle a logout.

Password generation for the Jupyter Server.

jupyter_server.auth.security.passwd(passphrase=None, algorithm='argon2')#

Generate hashed password and salt for use in server configuration.

In the server configuration, set c.ServerApp.password to the generated string.

Parameters:
  • passphrase (str) – Password to hash. If unspecified, the user is asked to input and verify a password.

  • algorithm (str) – Hashing algorithm to use (e.g, ‘sha1’ or any argument supported by hashlib.new(), or ‘argon2’).

Returns:

hashed_passphrase – Hashed password, in the format ‘hash_algorithm:salt:passphrase_hash’.

Return type:

str

Examples

>>> passwd("mypassword")  
'argon2:...'
jupyter_server.auth.security.passwd_check(hashed_passphrase, passphrase)#

Verify that a given passphrase matches its hashed version.

Parameters:
  • hashed_passphrase (str) – Hashed password, in the format returned by passwd.

  • passphrase (str) – Passphrase to validate.

Returns:

valid – True if the passphrase matches the hash.

Return type:

bool

Examples

>>> myhash = passwd("mypassword")
>>> passwd_check(myhash, "mypassword")
True
>>> passwd_check(myhash, "otherpassword")
False
>>> passwd_check("sha1:0e112c3ddfce:a68df677475c2b47b6e86d0467eec97ac5f4b85a", "mypassword")
True
jupyter_server.auth.security.persist_config(config_file=None, mode=384)#

Context manager that can be used to modify a config object

On exit of the context manager, the config will be written back to disk, by default with user-only (600) permissions.

jupyter_server.auth.security.set_password(password=None, config_file=None)#

Ask user for password, store it in JSON configuration file

A module with various utility methods for authorization in Jupyter Server.

jupyter_server.auth.utils.get_anonymous_username()#

Get a random user-name based on the moons of Jupiter. This function returns names like “Anonymous Io” or “Anonymous Metis”.

Return type:

str

jupyter_server.auth.utils.get_regex_to_resource_map()#

Returns a dictionary with all of Jupyter Server’s request handler URL regex patterns mapped to their resource name.

e.g. { “/api/contents/<regex_pattern>”: “contents”, …}

jupyter_server.auth.utils.match_url_to_resource(url, regex_mapping=None)#

Finds the JupyterHandler regex pattern that would match the given URL and returns the resource name (str) of that handler.

e.g. /api/contents/… returns “contents”

jupyter_server.auth.utils.warn_disabled_authorization()#

DEPRECATED, does nothing

Module contents#
jupyter_server.base package#
Submodules#

Provides access to variables pertaining to specific call contexts.

class jupyter_server.base.call_context.CallContext#

Bases: object

CallContext essentially acts as a namespace for managing context variables.

Although not required, it is recommended that any “file-spanning” context variable names (i.e., variables that will be set or retrieved from multiple files or services) be added as constants to this class definition.

JUPYTER_HANDLER: str = 'JUPYTER_HANDLER'#

Provides access to the current request handler once set.

classmethod context_variable_names()#

Returns a list of variable names set for this call context.

Returns:

names – A list of variable names set for this call context.

Return type:

List[str]

classmethod get(name)#

Returns the value corresponding the named variable relative to this context.

If the named variable doesn’t exist, None will be returned.

Parameters:

name (str) – The name of the variable to get from the call context

Returns:

value – The value associated with the named variable for this call context

Return type:

Any

classmethod set(name, value)#

Sets the named variable to the specified value in the current call context.

Parameters:
  • name (str) – The name of the variable to store into the call context

  • value (Any) – The value of the variable to store into the call context

Return type:

None
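
A small sketch of the API (the variable name is arbitrary); the JUPYTER_HANDLER constant above is how the current request handler is typically retrieved:

from jupyter_server.base.call_context import CallContext

# Store a value for the current asynchronous call context...
CallContext.set("my_request_id", "abc123")

# ...and read it back later within the same logical request.
request_id = CallContext.get("my_request_id")

# The current request handler, when one has been set by the server:
handler = CallContext.get(CallContext.JUPYTER_HANDLER)

print(CallContext.context_variable_names())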

Base Tornado handlers for the Jupyter server.

class jupyter_server.base.handlers.APIHandler(application, request, **kwargs)#

Bases: JupyterHandler

Base class for API handlers

property content_security_policy: str#

The default Content-Security-Policy header

Can be overridden by defining Content-Security-Policy in settings[‘headers’]

finish(*args, **kwargs)#

Finish an API response.

Return type:

Future[Any]

get_login_url()#

Get the login url.

Return type:

str

options(*args, **kwargs)#

Get the options.

Return type:

None

async prepare()#

Prepare an API response.

Return type:

None

update_api_activity()#

Update last_activity of API requests

Return type:

None

write_error(status_code, **kwargs)#

APIHandler errors are JSON, not human pages

Return type:

None

class jupyter_server.base.handlers.APIVersionHandler(application, request, **kwargs)#

Bases: APIHandler

An API handler for the server version.

get()#

Get the server version info.

Return type:

None

class jupyter_server.base.handlers.AuthenticatedFileHandler(application, request, **kwargs)#

Bases: JupyterHandler, StaticFileHandler

static files should only be accessible when logged in

auth_resource = 'contents'#
compute_etag()#

Compute the etag.

Return type:

str | None

property content_security_policy: str#

The default Content-Security-Policy header

Can be overridden by defining Content-Security-Policy in settings[‘headers’]

get(path, **kwargs)#

Get a file by path.

Return type:

Awaitable[None]

get_content_type()#

Get the content type.

Return type:

str

head(path)#

Get the head response for a path.

Return type:

Awaitable[None]

set_headers()#

Set the headers.

Return type:

None

validate_absolute_path(root, absolute_path)#

Validate and return the absolute path.

Requires tornado 3.1

Adding to tornado’s own handling, forbids the serving of hidden files.

Return type:

str

class jupyter_server.base.handlers.AuthenticatedHandler(application, request, **kwargs)#

Bases: RequestHandler

A RequestHandler with an authenticated user.

property authorizer: Authorizer#
property base_url: str#

clear_login_cookie()#

Clear a login cookie.

Return type:

None

property content_security_policy: str#

The default Content-Security-Policy header

Can be overridden by defining Content-Security-Policy in settings[‘headers’]

property cookie_name: str#

force_clear_cookie(name, path='/', domain=None)#

Force a cookie clear.

Return type:

None

get_current_user()#

Get the current user.

Return type:

str

property identity_provider: IdentityProvider#
property logged_in: bool#

Is a user currently logged in?

property login_available: bool#

May a user proceed to log in?

This returns True if login capability is available, irrespective of whether the user is already logged in or not.

property login_handler: Any#

Return the login handler for this application, if any.

set_default_headers()#

Set the default headers.

Return type:

None

skip_check_origin()#

Ask my login_handler if I should skip the origin_check

For example: in the default LoginHandler, if a request is token-authenticated, origin checking should be skipped.

Return type:

bool

property token: str | None#

Return the login token for this application, if any.

property token_authenticated: bool#

Have I been authenticated with a token?

class jupyter_server.base.handlers.FileFindHandler(application, request, **kwargs)#

Bases: JupyterHandler, StaticFileHandler

subclass of StaticFileHandler for serving files from a search path

The setting “static_immutable_cache” can be set up to serve some static files as immutable (e.g. files whose names contain a hash). The setting is a list of base URLs; every static file URL starting with one of those will be treated as immutable.

compute_etag()#

Compute the etag.

Return type:

str | None

get(path, include_body=True)#
Return type:

Coroutine[Any, Any, None]

classmethod get_absolute_path(roots, path)#

locate a file to serve on our static file search path

Return type:

str

head(path)#
Return type:

Awaitable[None]

initialize(path, default_filename=None, no_cache_paths=None)#

Initialize the file find handler.

Return type:

None

root: tuple[str]#
set_headers()#

Set the headers.

Return type:

None

validate_absolute_path(root, absolute_path)#

check if the file should be served (raises 404, 403, etc.)

Return type:

str | None

class jupyter_server.base.handlers.FilesRedirectHandler(application, request, **kwargs)#

Bases: JupyterHandler

Handler for redirecting relative URLs to the /files/ handler

get(path='')#
Return type:

None

async static redirect_to_files(self, path)#

make redirect logic a reusable static method

so it can be called from other handlers.

Return type:

None

class jupyter_server.base.handlers.JupyterHandler(application, request, **kwargs)#

Bases: AuthenticatedHandler

Jupyter-specific extensions to authenticated handling

Mostly property shortcuts to Jupyter-specific settings.

property allow_credentials: bool#

Whether to set Access-Control-Allow-Credentials

property allow_origin: str#

Normal Access-Control-Allow-Origin

property allow_origin_pat: str | None#

Regular expression version of allow_origin

check_host()#

Check the host header if remote access disallowed.

Returns True if the request should continue, False otherwise.

Return type:

bool

check_origin(origin_to_satisfy_tornado='')#

Check Origin for cross-site API requests, including websockets

Copied from WebSocket with changes:

  • allow unspecified host/origin (e.g. scripts)

  • allow token-authenticated requests

Return type:

bool

check_referer()#

Check Referer for cross-site requests. Disables requests to certain endpoints with external or missing Referer. If set, allow_origin settings are applied to the Referer to whitelist specific cross-origin sites. Used on GET for api endpoints and /files/ to block cross-site inclusion (XSSI).

Return type:

bool

check_xsrf_cookie()#

Bypass xsrf cookie checks when token-authenticated

Return type:

None

property config: dict[str, Any] | None#
property config_manager: ConfigManager#
property contents_js_source: str#
property contents_manager: ContentsManager#
property default_url: str#
property event_logger: EventLogger#
get_json_body()#

Return the body of the request as JSON data.

Return type:

dict[str, Any] | None

get_origin()#
Return type:

str | None

get_template(name)#

Return the jinja template object for a given name

property jinja_template_vars: dict[str, Any]#

User-supplied values to supply to jinja templates.

property kernel_manager: AsyncMappingKernelManager#
property kernel_spec_manager: KernelSpecManager#
property log: Logger#

use the Jupyter log by default, falling back on tornado’s logger

property mathjax_config: str#
property mathjax_url: str#
async prepare(*, _redirect_to_login=True)#

Prepare a response.

Return type:

Awaitable[None] | None

render_template(name, **ns)#

Render a template by name.

property serverapp: ServerApp | None#
property session_manager: SessionManager#
set_attachment_header(filename)#

Set Content-Disposition: attachment header

As a method to ensure handling of filename encoding

Return type:

None

set_cors_headers()#

Add CORS headers, if defined

Now that current_user is async (jupyter-server 2.0), must be called at the end of prepare(), instead of in set_default_headers.

Return type:

None

set_default_headers()#

Add CORS headers, if defined

Return type:

None

property template_namespace: dict[str, Any]#
property terminal_manager: TerminalManager#
property version_hash: str#

The version hash to use for cache hints for static files

write_error(status_code, **kwargs)#

render custom error pages

Return type:

None

property ws_url: str#
class jupyter_server.base.handlers.MainHandler(application, request, **kwargs)#

Bases: JupyterHandler

Simple handler for base_url.

get()#

Get the main template.

Return type:

None

post()#

Get the main template.

Return type:

None

put()#

Get the main template.

Return type:

None

class jupyter_server.base.handlers.PrometheusMetricsHandler(application, request, **kwargs)#

Bases: JupyterHandler

Return prometheus metrics for this server

get()#

Get prometheus metrics.

Return type:

None

class jupyter_server.base.handlers.PublicStaticFileHandler(application, request, **kwargs)#

Bases: StaticFileHandler

Same as web.StaticFileHandler, but decorated to acknowledge that auth is not required.

get(path, include_body=True)#
Return type:

Coroutine[Any, Any, None]

head(path)#
Return type:

Awaitable[None]

class jupyter_server.base.handlers.RedirectWithParams(application, request, **kwargs)#

Bases: RequestHandler

Same as web.RedirectHandler, but preserves URL parameters

get()#

Get a redirect.

Return type:

None

initialize(url, permanent=True)#

Initialize a redirect handler.

Return type:

None

class jupyter_server.base.handlers.Template404(application, request, **kwargs)#

Bases: JupyterHandler

Render our 404 template

async prepare()#

Prepare a 404 response.

Return type:

None

class jupyter_server.base.handlers.TrailingSlashHandler(application, request, **kwargs)#

Bases: RequestHandler

Simple redirect handler that strips trailing slashes

This should be the first, highest priority handler.

get()#

Handle trailing slashes in a get.

Return type:

None

post()#

Handle trailing slashes in a get.

Return type:

None

put()#

Handle trailing slashes in a get.

Return type:

None

jupyter_server.base.handlers.json_errors(method)#

Decorate methods with this to return GitHub style JSON errors.

This should be used on any JSON API on any handler method that can raise HTTPErrors.

This will grab the latest HTTPError exception using sys.exc_info and then:

  1. Set the HTTP status code based on the HTTPError

  2. Create and return a JSON body with a message field describing the error in a human readable form.

Return type:

Any

jupyter_server.base.handlers.json_sys_info()#

Get sys info as json.

jupyter_server.base.handlers.log()#

Get the application log.

Return type:

Logger

Base websocket classes.

class jupyter_server.base.websocket.WebSocketMixin#

Bases: object

Mixin for common websocket options

check_origin(origin=None)#

Check Origin == Host or Access-Control-Allow-Origin.

Tornado >= 4 calls this method automatically, raising 403 if it returns False.

clear_cookie(*args, **kwargs)#

meaningless for websockets

last_ping = 0.0#
last_pong = 0.0#
on_pong(data)#

Handle a pong message.

open(*args, **kwargs)#

Open the websocket.

ping_callback = None#
property ping_interval#

The interval for websocket keep-alive pings.

Set ws_ping_interval = 0 to disable pings.

property ping_timeout#

If no ping is received in this many milliseconds, close the websocket connection (VPNs, etc. can fail to cleanly close ws connections). Default is max of 3 pings or 30 seconds.

prepare(*args, **kwargs)#

Handle a get request.

send_ping()#

send a ping to keep the websocket alive

stream: Optional[IOStream] = None#

This module is deprecated in Jupyter Server 2.0

Module contents#
jupyter_server.extension package#
Submodules#

An extension application.

class jupyter_server.extension.application.ExtensionApp(**kwargs)#

Bases: JupyterApp

Base class for configurable Jupyter Server Extension Applications.

ExtensionApp subclasses can be initialized two ways:

  • Extension is listed as a jpserver_extension, and ServerApp calls its load_jupyter_server_extension classmethod. This is the classic way of loading a server extension.

  • Extension is launched directly by calling its launch_instance class method. This method can be set as an entry_point in the extension’s setup.py.
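
A minimal sketch of an ExtensionApp that appends a handler and can be launched directly (all names and routes here are hypothetical):

from tornado import web

from jupyter_server.base.handlers import JupyterHandler
from jupyter_server.extension.application import ExtensionApp

class HelloHandler(JupyterHandler):
    @web.authenticated
    def get(self):
        self.finish("Hello from my_extension")

class MyExtensionApp(ExtensionApp):
    name = "my_extension"
    extension_url = "/my_extension"

    def initialize_handlers(self):
        # Handlers are appended to the ServerApp during initialize().
        self.handlers.append((r"/my_extension/hello", HelloHandler))

if __name__ == "__main__":
    # The launch_instance path: starts a stock ServerApp with this extension enabled.
    MyExtensionApp.launch_instance()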

classes: ClassesType = [<class 'jupyter_server.serverapp.ServerApp'>]#
property config_file_paths#

Look on the same path as our parent for config files

current_activity()#

Return a list of activity happening in this extension.

default_url#

A trait for unicode strings.

extension_url = '/'#
file_url_prefix#

A trait for unicode strings.

classmethod get_extension_package()#

Get an extension package.

classmethod get_extension_point()#

Get an extension point.

handlers: List[tuple[t.Any, ...]]#

Handlers appended to the server.

initialize()#

Initialize the extension app. The corresponding server app and webapp should already be initialized by this step.

  • Appends Handlers to the ServerApp,

  • Passes config and settings from ExtensionApp to the Tornado web application

  • Points Tornado Webapp to templates and static assets.

initialize_handlers()#

Override this method to append handlers to a Jupyter Server.

classmethod initialize_server(argv=None, load_other_extensions=True, **kwargs)#

Creates an instance of ServerApp and explicitly sets this extension to enabled=True (i.e. superseding any disabling found in other config files).

The launch_instance method uses this method to initialize and start a server.

initialize_settings()#

Override this method to add handling of settings.

initialize_templates()#

Override this method to add handling of template files.

classmethod launch_instance(argv=None, **kwargs)#

Launch the extension like an application. Initializes and configures a stock server, appends the extension to it, then starts the server and routes to the extension’s landing page.

classmethod load_classic_server_extension(serverapp)#

Enables extension to be loaded as classic Notebook (jupyter/notebook) extension.

load_other_extensions = True#
classmethod make_serverapp(**kwargs)#

Instantiate the ServerApp

Override to customize the ServerApp before it loads any configuration

Return type:

ServerApp

name: str | Unicode[str, str] = 'ExtensionApp'#
open_browser#

Whether to open in a browser after starting. The specific browser used is platform dependent and determined by the python standard library webbrowser module, unless it is overridden using the --browser (ServerApp.browser) configuration option.

serverapp: ServerApp | None#

A trait which allows any value.

serverapp_class#

alias of ServerApp

serverapp_config: dict[str, t.Any] = {}#
settings#

Settings that will be passed to the server.

start()#

Start the underlying Jupyter server.

Server should be started after extension is initialized.

static_paths#

paths to search for serving static files.

This allows adding javascript/css to be available from the notebook server machine, or overriding individual files in the IPython

static_url_prefix#

Url where the static assets for the extension are served.

stop()#

Stop the underlying Jupyter server.

async stop_extension()#

Cleanup any resources managed by this extension.

template_paths#

Paths to search for serving jinja templates.

Can be used to override templates from notebook.templates.

class jupyter_server.extension.application.ExtensionAppJinjaMixin(*args, **kwargs)#

Bases: HasTraits

Use Jinja templates for HTML templates on top of an ExtensionApp.

jinja2_options#

Options to pass to the jinja2 environment for this

exception jupyter_server.extension.application.JupyterServerExtensionException#

Bases: Exception

Exception class for raising for Server extensions errors.

Extension config.

class jupyter_server.extension.config.ExtensionConfigManager(**kwargs)#

Bases: ConfigManager

A manager class to interface with Jupyter Server Extension config found in a config.d folder. It is assumed that all configuration files in this directory are JSON files.

disable(name)#

Disable an extension by name.

enable(name)#

Enable an extension by name.

enabled(name, section_name='jupyter_server_config', include_root=True)#

Is the extension enabled?

get_jpserver_extensions(section_name='jupyter_server_config')#

Return the jpserver_extensions field from all config files found.

An extension handler.

class jupyter_server.extension.handler.ExtensionHandlerJinjaMixin#

Bases: object

Mixin class for ExtensionApp handlers that use jinja templating for template rendering.

get_template(name)#

Return the jinja template object for a given name

Return type:

str

class jupyter_server.extension.handler.ExtensionHandlerMixin#

Bases: object

Base class for Jupyter server extension handlers.

Subclasses can serve static files behind a namespaced endpoint: “<base_url>/static/<name>/”

This allows multiple extensions to serve static files under their own namespace and avoid intercepting requests for other extensions.

property base_url: str#
property config: Config#
property extensionapp: ExtensionApp#
initialize(name, *args, **kwargs)#
Return type:

None

property log: Logger#
property server_config: Config#
property serverapp: ServerApp#
settings: dict[str, Any]#
property static_path: str#
static_url(path, include_host=None, **kwargs)#

Returns a static URL for the given relative static file path. This method requires you set the {name}_static_path setting in your extension (which specifies the root directory of your static files).

This method returns a versioned url (by default appending ?v=<signature>), which allows the static files to be cached indefinitely. This can be disabled by passing include_version=False (in the default implementation; other static file implementations are not required to support this, but they may support other options).

By default this method returns URLs relative to the current host, but if include_host is true the URL returned will be absolute. If this handler has an include_host attribute, that value will be used as the default for all static_url calls that do not pass include_host as a keyword argument.

Return type:

str

property static_url_prefix: str#

The extension manager.

class jupyter_server.extension.manager.ExtensionManager(**kwargs)#

Bases: LoggingConfigurable

High level interface for finding, validating, linking, loading, and managing Jupyter Server extensions.

Usage: m = ExtensionManager(config_manager=…)

add_extension(extension_name, enabled=False)#

Try to add extension to manager, return True if successful. Otherwise, return False.

any_activity()#

Check for any activity currently happening across all extension applications.

config_manager#

A trait whose value must be an instance of a specified class.

The value can also be an instance of a subclass of the specified class.

Subclasses can declare default classes by overriding the klass attribute

property extension_apps#

Return mapping of extension names and sets of ExtensionApp objects.

property extension_points#

Return mapping of extension point names and ExtensionPoint objects.

extensions#

Dictionary with extension package names as keys and ExtensionPackage objects as values.

from_config_manager(config_manager)#

Add extensions found by an ExtensionConfigManager

from_jpserver_extensions(jpserver_extensions)#

Add extensions from ‘jpserver_extensions’-like dictionary.

link_all_extensions()#

Link all enabled extensions to an instance of ServerApp

link_extension(name)#

Link an extension by name.

linked_extensions#

Dictionary with extension names as keys

values are True if the extension is linked, False if not.

load_all_extensions()#

Load all enabled extensions and append them to the parent ServerApp.

load_extension(name)#

Load an extension by name.

serverapp#

A trait which allows any value.

property sorted_extensions#

Returns an extensions dictionary, sorted alphabetically.

async stop_all_extensions()#

Call the shutdown hooks in all extensions.

async stop_extension(name, apps)#

Call the shutdown hooks in the specified apps.

class jupyter_server.extension.manager.ExtensionPackage(**kwargs: Any)#

Bases: LoggingConfigurable

An API for interfacing with a Jupyter Server extension package.

Usage:

ext_name = "my_extensions"
extpkg = ExtensionPackage(name=ext_name)

enabled#

Whether the extension package is enabled.

extension_points#

An instance of a Python dict.

One or more traits can be passed to the constructor to validate the keys and/or values of the dict. If you need more detailed validation, you may use a custom validator method.

Changed in version 5.0: Added key_trait for validating dict keys.

Changed in version 5.0: Deprecated ambiguous trait, traits args in favor of value_trait, per_key_traits.

link_all_points(serverapp)#

Link all extension points.

link_point(point_name, serverapp)#

Link an extension point.

load_all_points(serverapp)#

Load all extension points.

load_point(point_name, serverapp)#

Load an extension point.

metadata#

Extension metadata loaded from the extension package.

module#

The module for this extension package. None if not enabled

name#

Name of an importable Python package.

validate()#

Validate all extension points in this package.

version#

The version of this extension package, if it can be found. Otherwise, an empty string.

class jupyter_server.extension.manager.ExtensionPoint(*args, **kwargs)#

Bases: HasTraits

A simple API for connecting to a Jupyter Server extension point defined by metadata and importable from a Python package.

property app#

If the metadata includes an app field

property config#

Return any configuration provided by this extension point.

link(serverapp)#

Link the extension to a Jupyter ServerApp object.

This looks for a _link_jupyter_server_extension function in the extension’s module or ExtensionApp class.

property linked#

Has this extension point been linked to the server.

Will pull from ExtensionApp’s trait, if this point is an instance of ExtensionApp.

load(serverapp)#

Load the extension in a Jupyter ServerApp object.

This looks for a _load_jupyter_server_extension function in the extension’s module or ExtensionApp class.

metadata#

An instance of a Python dict.

One or more traits can be passed to the constructor to validate the keys and/or values of the dict. If you need more detailed validation, you may use a custom validator method.

Changed in version 5.0: Added key_trait for validating dict keys.

Changed in version 5.0: Deprecated ambiguous trait, traits args in favor of value_trait, per_key_traits.

property module#

The imported module (using importlib.import_module)

property module_name#

Name of the Python package module where the extension’s _load_jupyter_server_extension can be found.

property name#

Name of the extension.

If it’s not provided in the metadata, name is set to the extension’s module name.

validate()#

Check that both a linker and loader exists.

Utilities for installing extensions

exception jupyter_server.extension.serverextension.ArgumentConflict#

Bases: ValueError

class jupyter_server.extension.serverextension.BaseExtensionApp(**kwargs)#

Bases: JupyterApp

Base extension installer app

aliases: StrDict = {'config': 'JupyterApp.config_file', 'log-level': 'Application.log_level'}#
property config_dir: str#

A trait for unicode strings.

flags: StrDict = {'debug': ({'Application': {'log_level': 10}}, 'set log level to logging.DEBUG (maximize logging output)'), 'py': ({'BaseExtensionApp': {'python': True}}, 'Install from a Python package'), 'python': ({'BaseExtensionApp': {'python': True}}, 'Install from a Python package'), 'show-config': ({'Application': {'show_config': True}}, "Show the application's configuration (human-readable format)"), 'show-config-json': ({'Application': {'show_config_json': True}}, "Show the application's configuration (json format)"), 'sys-prefix': ({'BaseExtensionApp': {'sys_prefix': True}}, 'Use sys.prefix as the prefix for installing extensions (for environments, packaging)'), 'system': ({'BaseExtensionApp': {'sys_prefix': False, 'user': False}}, 'Apply the operation system-wide'), 'user': ({'BaseExtensionApp': {'user': True}}, 'Apply the operation only for the given user')}#
python#

Install from a Python package

sys_prefix#

Use the sys.prefix as the prefix

user#

Whether to do a user install

version: str | Unicode[str, str | bytes] = '2.14.0'#
class jupyter_server.extension.serverextension.DisableServerExtensionApp(**kwargs)#

Bases: ToggleServerExtensionApp

An App that disables Server Extensions

description: str | Unicode[str, str | bytes] = '\n    Disable a server extension in configuration.\n\n    Usage\n        jupyter server extension disable [--system|--sys-prefix]\n    '#
name: str | Unicode[str, str | bytes] = 'jupyter server extension disable'#
class jupyter_server.extension.serverextension.EnableServerExtensionApp(**kwargs)#

Bases: ToggleServerExtensionApp

An App that enables (and validates) Server Extensions

description: str | Unicode[str, str | bytes] = '\n    Enable a server extension in configuration.\n\n    Usage\n        jupyter server extension enable [--system|--sys-prefix]\n    '#
name: str | Unicode[str, str | bytes] = 'jupyter server extension enable'#
class jupyter_server.extension.serverextension.ListServerExtensionsApp(**kwargs)#

Bases: BaseExtensionApp

An App that lists (and validates) Server Extensions

description: str | Unicode[str, str | bytes] = 'List all server extensions known by the configuration system'#
list_server_extensions()#

List all enabled and disabled server extensions, by config path

Enabled extensions are validated, potentially generating warnings.

Return type:

None

name: str | Unicode[str, str | bytes] = 'jupyter server extension list'#
start()#

Perform the App’s actions as configured

Return type:

None

version: str | Unicode[str, str | bytes] = '2.14.0'#
class jupyter_server.extension.serverextension.ServerExtensionApp(**kwargs)#

Bases: BaseExtensionApp

Root level server extension app

description: str = 'Work with Jupyter server extensions'#
examples: str | Unicode[str, str | bytes] = '\njupyter server extension list                        # list all configured server extensions\njupyter server extension enable --py <packagename>   # enable all server extensions in a Python package\njupyter server extension disable --py <packagename>  # disable all server extensions in a Python package\n'#
name: str | Unicode[str, str | bytes] = 'jupyter server extension'#
start()#

Perform the App’s actions as configured

Return type:

None

subcommands: dict[str, t.Any] = {'disable': (<class 'jupyter_server.extension.serverextension.DisableServerExtensionApp'>, 'Disable a server extension'), 'enable': (<class 'jupyter_server.extension.serverextension.EnableServerExtensionApp'>, 'Enable a server extension'), 'list': (<class 'jupyter_server.extension.serverextension.ListServerExtensionsApp'>, 'List server extensions')}#
version: str | Unicode[str, str | bytes] = '2.14.0'#
class jupyter_server.extension.serverextension.ToggleServerExtensionApp(**kwargs)#

Bases: BaseExtensionApp

A base class for enabling/disabling extensions

description: str | Unicode[str, str | bytes] = 'Enable/disable a server extension using frontend configuration files.'#
flags: StrDict = {'debug': ({'Application': {'log_level': 10}}, 'set log level to logging.DEBUG (maximize logging output)'), 'py': ({'ToggleServerExtensionApp': {'python': True}}, 'Install from a Python package'), 'python': ({'ToggleServerExtensionApp': {'python': True}}, 'Install from a Python package'), 'show-config': ({'Application': {'show_config': True}}, "Show the application's configuration (human-readable format)"), 'show-config-json': ({'Application': {'show_config_json': True}}, "Show the application's configuration (json format)"), 'sys-prefix': ({'ToggleServerExtensionApp': {'sys_prefix': True}}, 'Use sys.prefix as the prefix for installing server extensions'), 'system': ({'ToggleServerExtensionApp': {'sys_prefix': False, 'user': False}}, 'Perform the operation system-wide'), 'user': ({'ToggleServerExtensionApp': {'user': True}}, 'Perform the operation for the current user')}#
name: str | Unicode[str, str | bytes] = 'jupyter server extension enable/disable'#
start()#

Perform the App’s actions as configured

Return type:

None

toggle_server_extension(import_name)#

Change the status of a named server extension.

Uses the value of self._toggle_value.

Parameters:

import_name (str) – Importable Python module (dotted-notation) exposing the magic-named load_jupyter_server_extension function

Return type:

None

jupyter_server.extension.serverextension.toggle_server_extension_python(import_name, enabled=None, parent=None, user=False, sys_prefix=True)#

Toggle the boolean setting for a given server extension in a Jupyter config file.

Return type:

None
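
For example (the extension name is hypothetical), to enable an extension for the current environment from Python rather than the command line:

from jupyter_server.extension.serverextension import toggle_server_extension_python

# Writes the enabling config for the current environment (sys.prefix).
toggle_server_extension_python("my_extension", enabled=True, sys_prefix=True)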

Extension utilities.

exception jupyter_server.extension.utils.ExtensionLoadingError#

Bases: Exception

An extension loading error.

exception jupyter_server.extension.utils.ExtensionMetadataError#

Bases: Exception

An extension metadata error.

exception jupyter_server.extension.utils.ExtensionModuleNotFound#

Bases: Exception

An extension module not found error.

exception jupyter_server.extension.utils.NotAnExtensionApp#

Bases: Exception

An error raised when a module is not an extension.

jupyter_server.extension.utils.get_loader(obj, logger=None)#

Looks for _load_jupyter_server_extension as an attribute of the object or module.

Adds backwards compatibility for old function name missing the underscore prefix.

jupyter_server.extension.utils.get_metadata(package_name, logger=None)#

Find the extension metadata from an extension package.

This looks for a _jupyter_server_extension_points function that returns metadata about all extension points within a Jupyter Server Extension package.

If it doesn’t exist, return a basic metadata packet given the module name.
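
A sketch of such a function (package and module names are hypothetical); it usually lives in the package’s __init__.py:

# my_extension/__init__.py
def _jupyter_server_extension_points():
    """Advertise the extension points provided by this package."""
    from my_extension.app import MyExtensionApp

    return [{"module": "my_extension.app", "app": MyExtensionApp}]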

jupyter_server.extension.utils.validate_extension(name)#

Raises an exception if the extension is missing a needed hook or metadata field. An extension is valid if: 1) name is an importable Python package, 2) the package has a _jupyter_server_extension_points function, and 3) each extension point has a _load_jupyter_server_extension function.

If this works, nothing should happen.

Module contents#
jupyter_server.files package#
Submodules#

Serve files directly from the ContentsManager.

class jupyter_server.files.handlers.FilesHandler(application, request, **kwargs)#

Bases: JupyterHandler, StaticFileHandler

serve files via ContentsManager

Normally used when ContentsManager is not a FileContentsManager.

FileContentsManager subclasses use AuthenticatedFileHandler by default, a subclass of StaticFileHandler.

auth_resource = 'contents'#
property content_security_policy#

The content security policy.

get(path, include_body=True)#

Get a file by path.

head(path)#

The head response.

Return type:

Awaitable[None] | None

Module contents#
jupyter_server.gateway package#
Submodules#

Gateway connection classes.

class jupyter_server.gateway.connections.GatewayWebSocketConnection(**kwargs)#

Bases: BaseKernelWebsocketConnection

Web socket connection that proxies to a kernel/enterprise gateway.

async connect()#

Connect to the socket.

disconnect()#

Handle a disconnect.

disconnected#

A boolean (True, False) trait.

handle_incoming_message(message)#

Send message to gateway server.

Return type:

None

handle_outgoing_message(incoming_msg, *args)#

Send message to the notebook client.

Return type:

None

kernel_ws_protocol#

A trait for unicode strings.

retry#

An int trait.

ws#

A trait whose value must be an instance of a specified class.

The value can also be an instance of a subclass of the specified class.

Subclasses can declare default classes by overriding the klass attribute

ws_future#

A trait whose value must be an instance of a specified class.

The value can also be an instance of a subclass of the specified class.

Subclasses can declare default classes by overriding the klass attribute

A kernel gateway client.

class jupyter_server.gateway.gateway_client.GatewayClient(**kwargs: Any)#

Bases: SingletonConfigurable

This class manages the configuration. It’s its own singleton class so that we can share these values across all objects. It also contains some helper methods to build request arguments out of the various config options.
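
For example (host and token are placeholders), a server can be pointed at a gateway via jupyter_server_config.py, using traits documented below, or via the corresponding environment variables such as JUPYTER_GATEWAY_URL:

# jupyter_server_config.py
c.GatewayClient.url = "http://my-gateway-host:8888"
c.GatewayClient.auth_token = "<gateway-token>"
c.GatewayClient.request_timeout = 60.0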

KERNEL_LAUNCH_TIMEOUT = 40#
accept_cookies#

Accept and manage cookies sent by the service side. This is often useful for load balancers to decide which backend node to use. (JUPYTER_GATEWAY_ACCEPT_COOKIES env var)

accept_cookies_env = 'JUPYTER_GATEWAY_ACCEPT_COOKIES'#
accept_cookies_value = False#
allowed_envs#

A comma-separated list of environment variable names that will be included, along with their values, in the kernel startup request. The corresponding client_envs configuration value must also be set on the Gateway server - since that configuration value indicates which environmental values to make available to the kernel. (JUPYTER_GATEWAY_ALLOWED_ENVS env var)

allowed_envs_default_value = ''#
allowed_envs_env = 'JUPYTER_GATEWAY_ALLOWED_ENVS'#
auth_header_key#

The authorization header’s key name (typically ‘Authorization’) used in the HTTP headers. The header will be formatted as:

{'{auth_header_key}': '{auth_scheme} {auth_token}'}

If the authorization header key takes a single value, auth_scheme should be set to None and ‘auth_token’ should be configured to use the appropriate value.

(JUPYTER_GATEWAY_AUTH_HEADER_KEY env var)

auth_header_key_default_value = 'Authorization'#
auth_header_key_env = 'JUPYTER_GATEWAY_AUTH_HEADER_KEY'#
auth_scheme#

The auth scheme, added as a prefix to the authorization token used in the HTTP headers. (JUPYTER_GATEWAY_AUTH_SCHEME env var)

auth_scheme_default_value = 'token'#
auth_scheme_env = 'JUPYTER_GATEWAY_AUTH_SCHEME'#
auth_token#

The authorization token used in the HTTP headers. The header will be formatted as:

{'{auth_header_key}': '{auth_scheme} {auth_token}'}

(JUPYTER_GATEWAY_AUTH_TOKEN env var)

auth_token_default_value = ''#
auth_token_env = 'JUPYTER_GATEWAY_AUTH_TOKEN'#
ca_certs#

The filename of CA certificates or None to use defaults. (JUPYTER_GATEWAY_CA_CERTS env var)

ca_certs_env = 'JUPYTER_GATEWAY_CA_CERTS'#
client_cert#

The filename for client SSL certificate, if any. (JUPYTER_GATEWAY_CLIENT_CERT env var)

client_cert_env = 'JUPYTER_GATEWAY_CLIENT_CERT'#
client_key#

The filename for client SSL key, if any. (JUPYTER_GATEWAY_CLIENT_KEY env var)

client_key_env = 'JUPYTER_GATEWAY_CLIENT_KEY'#
connect_timeout#

The time allowed for HTTP connection establishment with the Gateway server. (JUPYTER_GATEWAY_CONNECT_TIMEOUT env var)

connect_timeout_default_value = 40.0#
connect_timeout_env = 'JUPYTER_GATEWAY_CONNECT_TIMEOUT'#
emit(data)#

Emit event using the core event schema from Jupyter Server’s Gateway Client.

env_whitelist#

Deprecated, use GatewayClient.allowed_envs

event_logger#

A trait whose value must be an instance of a specified class.

The value can also be an instance of a subclass of the specified class.

Subclasses can declare default classes by overriding the klass attribute

event_schema_id = 'https://events.jupyter.org/jupyter_server/gateway_client/v1'#
property gateway_enabled#
gateway_retry_interval#

The time allowed for the first HTTP reconnection attempt with the Gateway server. Each subsequent interval is JUPYTER_GATEWAY_RETRY_INTERVAL multiplied by a factor of two per retry, capped at JUPYTER_GATEWAY_RETRY_INTERVAL_MAX. (JUPYTER_GATEWAY_RETRY_INTERVAL env var)

gateway_retry_interval_default_value = 1.0#
gateway_retry_interval_env = 'JUPYTER_GATEWAY_RETRY_INTERVAL'#
gateway_retry_interval_max#

The maximum time allowed for HTTP reconnection retry with the Gateway server. (JUPYTER_GATEWAY_RETRY_INTERVAL_MAX env var)

gateway_retry_interval_max_default_value = 30.0#
gateway_retry_interval_max_env = 'JUPYTER_GATEWAY_RETRY_INTERVAL_MAX'#
gateway_retry_max#

The maximum retries allowed for HTTP reconnection with the Gateway server. (JUPYTER_GATEWAY_RETRY_MAX env var)

gateway_retry_max_default_value = 5#
gateway_retry_max_env = 'JUPYTER_GATEWAY_RETRY_MAX'#
gateway_token_renewer: GatewayTokenRenewerBase#
gateway_token_renewer_class#

The class to use for Gateway token renewal. (JUPYTER_GATEWAY_TOKEN_RENEWER_CLASS env var)

gateway_token_renewer_class_default_value = 'jupyter_server.gateway.gateway_client.NoOpTokenRenewer'#
gateway_token_renewer_class_env = 'JUPYTER_GATEWAY_TOKEN_RENEWER_CLASS'#
headers#

Additional HTTP headers to pass on the request. This value will be converted to a dict. (JUPYTER_GATEWAY_HEADERS env var)

headers_default_value = '{}'#
headers_env = 'JUPYTER_GATEWAY_HEADERS'#
http_pwd#

The password for HTTP authentication. (JUPYTER_GATEWAY_HTTP_PWD env var)

http_pwd_env = 'JUPYTER_GATEWAY_HTTP_PWD'#
http_user#

The username for HTTP authentication. (JUPYTER_GATEWAY_HTTP_USER env var)

http_user_env = 'JUPYTER_GATEWAY_HTTP_USER'#
init_connection_args()#

Initialize arguments used on every request. Since these are primarily static values, we’ll perform this operation once.

kernels_endpoint#

The gateway API endpoint for accessing kernel resources (JUPYTER_GATEWAY_KERNELS_ENDPOINT env var)

kernels_endpoint_default_value = '/api/kernels'#
kernels_endpoint_env = 'JUPYTER_GATEWAY_KERNELS_ENDPOINT'#
kernelspecs_endpoint#

The gateway API endpoint for accessing kernelspecs (JUPYTER_GATEWAY_KERNELSPECS_ENDPOINT env var)

kernelspecs_endpoint_default_value = '/api/kernelspecs'#
kernelspecs_endpoint_env = 'JUPYTER_GATEWAY_KERNELSPECS_ENDPOINT'#
kernelspecs_resource_endpoint#

The gateway endpoint for accessing kernelspecs resources (JUPYTER_GATEWAY_KERNELSPECS_RESOURCE_ENDPOINT env var)

kernelspecs_resource_endpoint_default_value = '/kernelspecs'#
kernelspecs_resource_endpoint_env = 'JUPYTER_GATEWAY_KERNELSPECS_RESOURCE_ENDPOINT'#
launch_timeout_pad#

Timeout pad to be ensured between KERNEL_LAUNCH_TIMEOUT and request_timeout such that request_timeout >= KERNEL_LAUNCH_TIMEOUT + launch_timeout_pad. (JUPYTER_GATEWAY_LAUNCH_TIMEOUT_PAD env var)

launch_timeout_pad_default_value = 2.0#
launch_timeout_pad_env = 'JUPYTER_GATEWAY_LAUNCH_TIMEOUT_PAD'#
load_connection_args(**kwargs)#

Merges the static args relative to the connection, with the given keyword arguments. If static args have yet to be initialized, we’ll do that here.

request_timeout#

The time allowed for HTTP request completion. (JUPYTER_GATEWAY_REQUEST_TIMEOUT env var)

request_timeout_default_value = 42.0#
request_timeout_env = 'JUPYTER_GATEWAY_REQUEST_TIMEOUT'#
update_cookies(cookie)#

Update cookies from existing requests for load balancers

Return type:

None

url#

The url of the Kernel or Enterprise Gateway server where kernel specifications are defined and kernel management takes place. If defined, this Notebook server acts as a proxy for all kernel management and kernel specification retrieval. (JUPYTER_GATEWAY_URL env var)

url_env = 'JUPYTER_GATEWAY_URL'#
validate_cert#

For HTTPS requests, determines if server’s certificate should be validated or not. (JUPYTER_GATEWAY_VALIDATE_CERT env var)

validate_cert_default_value = True#
validate_cert_env = 'JUPYTER_GATEWAY_VALIDATE_CERT'#
ws_url#

The websocket url of the Kernel or Enterprise Gateway server. If not provided, this value will correspond to the value of the Gateway url with ‘ws’ in place of ‘http’. (JUPYTER_GATEWAY_WS_URL env var)

ws_url_env = 'JUPYTER_GATEWAY_WS_URL'#
class jupyter_server.gateway.gateway_client.GatewayTokenRenewerBase(**kwargs)#

Bases: ABC, LoggingConfigurable

Abstract base class for refreshing tokens used between this server and a Gateway server. Implementations requiring additional configuration can extend their class with appropriate configuration values or convey those values via appropriate environment variables relative to the implementation.

abstract get_token(auth_header_key, auth_scheme, auth_token, **kwargs)#

Given the current authorization header key, scheme, and token, this method returns a (potentially renewed) token for use against the Gateway server.

Return type:

str

class jupyter_server.gateway.gateway_client.GatewayTokenRenewerMeta(name, bases, classdict, **kwds)#

Bases: ABCMeta, MetaHasTraits

The metaclass necessary for proper ABC behavior in a Configurable.

class jupyter_server.gateway.gateway_client.NoOpTokenRenewer(**kwargs)#

Bases: GatewayTokenRenewerBase

NoOpTokenRenewer is the default value to the GatewayClient trait gateway_token_renewer and merely returns the provided token.

get_token(auth_header_key, auth_scheme, auth_token, **kwargs)#

This implementation simply returns the current authorization token.

Return type:

str

class jupyter_server.gateway.gateway_client.RetryableHTTPClient#

Bases: object

Inspired by urllib3.util.Retry (https://urllib3.readthedocs.io/en/stable/reference/urllib3.util.html), this class is initialized with desired retry characteristics and uses a recursive method fetch() against an instance of AsyncHTTPClient which tracks the current retry count across applicable request retries.

MAX_RETRIES_CAP = 10#
MAX_RETRIES_DEFAULT = 2#
backoff_factor: float = 0.1#
async fetch(endpoint, **kwargs)#

Retryable AsyncHTTPClient.fetch() method. When the request fails, this method will recurse up to max_retries times if the condition deserves a retry.

Return type:

HTTPResponse

max_retries: int = 2#
retried_errors: set[int] = {502, 503, 504, 599}#
retried_exceptions: set[type] = {<class 'ConnectionError'>}#
retried_methods: set[str] = {'DELETE', 'GET'}#
async jupyter_server.gateway.gateway_client.gateway_request(endpoint, **kwargs)#

Make an async request to kernel gateway endpoint, returns a response

Return type:

HTTPResponse

Gateway API handlers.

class jupyter_server.gateway.handlers.GatewayResourceHandler(application, request, **kwargs)#

Bases: APIHandler

Retrieves resources for specific kernelspec definitions from kernel/enterprise gateway.

get(kernel_name, path, include_body=True)#

Get a gateway resource by name and path.

class jupyter_server.gateway.handlers.GatewayWebSocketClient(**kwargs: Any)#

Bases: LoggingConfigurable

Proxy web socket connection to a kernel/enterprise gateway.

on_close()#

Web socket closed event.

on_message(message)#

Send message to gateway server.

on_open(kernel_id, message_callback, **kwargs)#

Web socket connection open against gateway server.

class jupyter_server.gateway.handlers.WebSocketChannelsHandler(application, request, **kwargs)#

Bases: WebSocketHandler, JupyterHandler

Gateway web socket channels handler.

authenticate()#

Run before finishing the GET request

Extend this method to add logic that should fire before the websocket finishes completing.

check_origin(origin=None)#

Check origin for the socket.

gateway = None#
async get(kernel_id, *args, **kwargs)#

Get the socket.

get_compression_options()#

Get the compression options for the socket.

initialize()#

Initialize the socket.

kernel_id = None#
on_close()#

Handle a closing socket.

on_message(message)#

Forward message to gateway web socket handler.

open(kernel_id, *args, **kwargs)#

Handle web socket connection open to notebook server and delegate to gateway web socket handler

ping_callback = None#
send_ping()#

Send a ping to the socket.

session = None#
set_default_headers()#

Undo the set_default_headers in JupyterHandler which doesn’t make sense for websockets

write_message(message, binary=False)#

Send message back to notebook client. This is called via callback from self.gateway._read_messages.

Kernel gateway managers.

class jupyter_server.gateway.managers.ChannelQueue(channel_name, channel_socket, log)#

Bases: Queue

A queue for a named channel.

channel_name: Optional[str] = None#
async get_msg(*args, **kwargs)#

Get a message from the queue.

Return type:

dict[str, Any]

is_alive()#

Whether the queue is alive.

Return type:

bool

response_router_finished: bool#
send(msg)#

Send a message to the queue.

Return type:

None

static serialize_datetime(dt)#

Serialize a datetime object.

start()#

Start the queue.

Return type:

None

stop()#

Stop the queue.

Return type:

None

class jupyter_server.gateway.managers.GatewayKernelClient(**kwargs: Any)#

Bases: AsyncKernelClient

Communicates with a single kernel indirectly via a websocket to a gateway server.

There are five channels associated with each kernel:

  • shell: for request/reply calls to the kernel.

  • iopub: for the kernel to publish results to frontends.

  • hb: for monitoring the kernel’s heartbeat.

  • stdin: for frontends to reply to raw_input calls in the kernel.

  • control: for kernel management calls to the kernel.

The messages that can be sent on these channels are exposed as methods of the client (KernelClient.execute, complete, history, etc.). These methods only send the message, they don’t wait for a reply. To get results, use e.g. get_shell_msg() to fetch messages from the shell channel.

allow_stdin: bool = False#
property control_channel#

Get the control channel object for this kernel.

property hb_channel#

Get the hb channel object for this kernel.

property iopub_channel#

Get the iopub channel object for this kernel.

property shell_channel#

Get the shell channel object for this kernel.

async start_channels(shell=True, iopub=True, stdin=True, hb=True, control=True)#

Starts the channels for this kernel.

For this class, we establish a websocket connection to the destination and set up the channel-based queues on which applicable messages will be posted.

property stdin_channel#

Get the stdin channel object for this kernel.

stop_channels()#

Stops all the running channels for this kernel.

For this class, we close the websocket connection and destroy the channel-based queues.

class jupyter_server.gateway.managers.GatewayKernelManager(**kwargs: Any)#

Bases: ServerKernelManager

Manages a single kernel remotely via a Gateway Server.

cleanup_resources(restart=False)#

Clean up resources when the kernel is shut down

client(**kwargs)#

Create a client configured to connect to our kernel

client_class: DottedObjectName#

A string holding a valid dotted object name in Python, such as A.b3._c

client_factory: Type#

A trait whose value must be a subclass of a specified class.

property has_kernel#

Has a kernel been started that we are managing.

async interrupt_kernel()#

Interrupts the kernel via an HTTP request.

async is_alive()#

Is the kernel process still running?

kernel = None#
kernel_id: Optional[str] = None#
async refresh_model(model=None)#

Refresh the kernel model.

Parameters:

model (dict) – The model from which to refresh the kernel. If None, the kernel model is fetched from the Gateway server.

async restart_kernel(**kw)#

Restarts a kernel via HTTP.

async shutdown_kernel(now=False, restart=False)#

Attempts to stop the kernel process cleanly via HTTP.

async start_kernel(**kwargs)#

Starts a kernel via HTTP in an asynchronous manner.

Parameters:

**kwargs (optional) – keyword arguments that are passed down to build the kernel_cmd and launching the kernel (e.g. Popen kwargs).

class jupyter_server.gateway.managers.GatewayKernelSpecManager(**kwargs: Any)#

Bases: KernelSpecManager

A gateway kernel spec manager.

async get_all_specs()#

Get all of the kernel specs for the gateway.

async get_kernel_spec(kernel_name, **kwargs)#

Get kernel spec for kernel_name.

Parameters:

kernel_name (str) – The name of the kernel.

async get_kernel_spec_resource(kernel_name, path)#

Get a kernel spec resource for kernel_name.

Parameters:
  • kernel_name (str) – The name of the kernel.

  • path (str) – The name of the desired resource

async list_kernel_specs()#

Get a list of kernel specs.

class jupyter_server.gateway.managers.GatewayMappingKernelManager(**kwargs: Any)#

Bases: AsyncMappingKernelManager

Kernel manager that supports remote kernels hosted by Jupyter Kernel or Enterprise Gateway.

async cull_kernels()#

Override cull_kernels, so we can be sure their state is current.

async interrupt_kernel(kernel_id, **kwargs)#

Interrupt a kernel by its kernel uuid.

Parameters:

kernel_id (uuid) – The id of the kernel to interrupt.

async kernel_model(kernel_id)#

Return a dictionary of kernel information described in the JSON standard model.

Parameters:

kernel_id (uuid) – The uuid of the kernel.

async list_kernels(**kwargs)#

Get a list of running kernels from the Gateway server.

We’ll use this opportunity to refresh the models in each of the kernels we’re managing.

remove_kernel(kernel_id)#

Complete override since we want to be more tolerant of missing keys

async restart_kernel(kernel_id, now=False, **kwargs)#

Restart a kernel by its kernel uuid.

Parameters:

kernel_id (uuid) – The id of the kernel to restart.

async shutdown_all(now=False)#

Shutdown all kernels.

async shutdown_kernel(kernel_id, now=False, restart=False)#

Shutdown a kernel by its kernel uuid.

Parameters:
  • kernel_id (uuid) – The id of the kernel to shutdown.

  • now (bool) – Shutdown the kernel immediately (True) or gracefully (False)

  • restart (bool) – The purpose of this shutdown is to restart the kernel (True)

async start_kernel(*, kernel_id=None, path=None, **kwargs)#

Start a kernel for a session and return its kernel_id.

Parameters:
  • kernel_id (uuid) – The uuid to associate the new kernel with. If this is not None, this kernel will be persistent whenever it is requested.

  • path (API path) – The API path (unicode, ‘/’ delimited) for the cwd. Will be transformed to an OS path relative to root_dir.

class jupyter_server.gateway.managers.GatewaySessionManager(**kwargs: Any)#

Bases: SessionManager

A gateway session manager.

async kernel_culled(kernel_id)#

Checks if the kernel is still considered alive and returns true if it’s not found.

Return type:

bool

kernel_manager#

A trait whose value must be an instance of a specified class.

The value can also be an instance of a subclass of the specified class.

Subclasses can declare default classes by overriding the klass attribute

class jupyter_server.gateway.managers.HBChannelQueue(channel_name, channel_socket, log)#

Bases: ChannelQueue

A queue for the heartbeat channel.

is_beating()#

Whether the channel is beating.

Return type:

bool

response_router_finished: bool#
Module contents#
jupyter_server.i18n package#
Module contents#

Server functions for loading translations

jupyter_server.i18n.cached_load(language, domain='nbjs')#

Load translations for one language, using in-memory cache if available

jupyter_server.i18n.combine_translations(accept_language, domain='nbjs')#

Combine translations for multiple accepted languages.

Returns data re-packaged in jed1.x format.

jupyter_server.i18n.load(language, domain='nbjs')#

Load translations from an nbjs.json file

jupyter_server.i18n.parse_accept_lang_header(accept_lang)#

Parses the ‘Accept-Language’ HTTP header.

Returns a list of language codes in ascending order of preference (with the most preferred language last).
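
A hedged example of that ordering, using made-up header values:

from jupyter_server.i18n import parse_accept_lang_header

# Codes are sorted by quality value; the most preferred language comes last.
codes = parse_accept_lang_header("en-US;q=0.7, fr-FR;q=0.9")
print(codes)  # the French codes appear after the English ones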

jupyter_server.kernelspecs package#
Submodules#

Kernelspecs API Handlers.

class jupyter_server.kernelspecs.handlers.KernelSpecResourceHandler(application, request, **kwargs)#

Bases: StaticFileHandler, JupyterHandler

A Kernelspec resource handler.

SUPPORTED_METHODS = ('GET', 'HEAD')#
auth_resource = 'kernelspecs'#
get(kernel_name, path, include_body=True)#

Get a kernelspec resource.

head(kernel_name, path)#

Get the head info for a kernel resource.

initialize()#

Initialize a kernelspec resource handler.

Module contents#
jupyter_server.nbconvert package#
Submodules#

Tornado handlers for nbconvert.

class jupyter_server.nbconvert.handlers.NbconvertFileHandler(application, request, **kwargs)#

Bases: JupyterHandler

An nbconvert file handler.

SUPPORTED_METHODS = ('GET',)#
auth_resource = 'nbconvert'#
get(format, path)#

Get a notebook file in a desired format.

class jupyter_server.nbconvert.handlers.NbconvertPostHandler(application, request, **kwargs)#

Bases: JupyterHandler

An nbconvert post handler.

SUPPORTED_METHODS = ('POST',)#
auth_resource = 'nbconvert'#
post(format)#

Convert a notebook file to a desired format.

jupyter_server.nbconvert.handlers.find_resource_files(output_files_dir)#

Find the resource files in a directory.

jupyter_server.nbconvert.handlers.get_exporter(format, **kwargs)#

get an exporter, raising appropriate errors

jupyter_server.nbconvert.handlers.respond_zip(handler, name, output, resources)#

Zip up the output and resource files and respond with the zip file.

Returns True if it has served a zip file, False if there are no resource files, in which case we serve the plain output file.

Module contents#
jupyter_server.prometheus package#
Submodules#

Log functions for prometheus

jupyter_server.prometheus.log_functions.prometheus_log_method(handler)#

Tornado log handler for recording RED metrics.

We record the following metrics:

  • Rate – the number of requests per second that your services are serving.

  • Errors – the number of failed requests per second.

  • Duration – the amount of time each request takes, expressed as a time interval.

We use the fully qualified name of the handler as a label, rather than every URL path, to reduce cardinality.

This function should be either the value of, or called from, the function set as the 'log_function' Tornado setting, so that it runs at the end of every request and records the metrics we need.
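
A minimal sketch of wiring this into Tornado's settings (the wrapper function name is illustrative):

import tornado.web
from jupyter_server.prometheus.log_functions import prometheus_log_method

def log_request(handler):
    # Record Rate/Errors/Duration metrics, then do any extra logging of your own.
    prometheus_log_method(handler)

app = tornado.web.Application([], log_function=log_request)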

Prometheus metrics exported by Jupyter Server

Read https://prometheus.io/docs/practices/naming/ for naming conventions for metrics & labels.

Module contents#
jupyter_server.services package#
Subpackages#
jupyter_server.services.api package#
Submodules#

Tornado handlers for api specifications.

class jupyter_server.services.api.handlers.APISpecHandler(application, request, **kwargs)#

Bases: StaticFileHandler, JupyterHandler

A spec handler for the REST API.

auth_resource = 'api'#
get()#

Get the API spec.

get_content_type()#

Get the content type.

head()#
initialize()#

Initialize the API spec handler.

class jupyter_server.services.api.handlers.APIStatusHandler(application, request, **kwargs)#

Bases: APIHandler

An API status handler.

auth_resource = 'api'#
get()#

Get the API status.

class jupyter_server.services.api.handlers.IdentityHandler(application, request, **kwargs)#

Bases: APIHandler

Get the current user’s identity model

get()#

Get the identity model.

Module contents#
jupyter_server.services.config package#
Submodules#

Tornado handlers for frontend config storage.

class jupyter_server.services.config.handlers.ConfigHandler(application, request, **kwargs)#

Bases: APIHandler

A config API handler.

auth_resource = 'config'#
get(section_name)#

Get config by section name.

patch(section_name)#

Update a config section by name.

put(section_name)#

Set a config section by name.

Manager to read and modify frontend config data in JSON files.

class jupyter_server.services.config.manager.ConfigManager(**kwargs)#

Bases: LoggingConfigurable

Config Manager used for storing frontend config

config_dir_name#

Name of the config directory.

get(section_name)#

Get the config from all config sections.

read_config_path#

An instance of a Python list.

set(section_name, data)#

Set the config only to the user’s config.

update(section_name, new_data)#

Update the config only to the user’s config.

write_config_dir#

A trait for unicode strings.

write_config_manager#

A trait whose value must be an instance of a specified class.

The value can also be an instance of a subclass of the specified class.

Subclasses can declare default classes by overriding the klass attribute
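
An illustrative use of the manager (paths and section names are made up):

from jupyter_server.services.config.manager import ConfigManager

cm = ConfigManager(
    read_config_path=["/etc/jupyter"],       # where to look for existing config
    write_config_dir="/tmp/jupyter-config",  # where set()/update() write
)
cm.update("notebook", {"example_setting": True})  # merged into the writable config
print(cm.get("notebook"))                         # combined view across all read paths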

Module contents#
class jupyter_server.services.config.ConfigManager(**kwargs)#

Bases: LoggingConfigurable

Config Manager used for storing frontend config

config_dir_name#

Name of the config directory.

get(section_name)#

Get the config from all config sections.

read_config_path#

An instance of a Python list.

set(section_name, data)#

Set the config only to the user’s config.

update(section_name, new_data)#

Update the config only to the user’s config.

write_config_dir#

A trait for unicode strings.

write_config_manager#

A trait whose value must be an instance of a specified class.

The value can also be an instance of a subclass of the specified class.

Subclasses can declare default classes by overriding the klass attribute

jupyter_server.services.contents package#
Submodules#

Classes for managing Checkpoints.

class jupyter_server.services.contents.checkpoints.AsyncCheckpoints(**kwargs)#

Bases: Checkpoints

Base class for managing checkpoints for a ContentsManager asynchronously.

async create_checkpoint(contents_mgr, path)#

Create a checkpoint.

async delete_all_checkpoints(path)#

Delete all checkpoints for the given path.

async delete_checkpoint(checkpoint_id, path)#

delete a checkpoint for a file

async list_checkpoints(path)#

Return a list of checkpoints for a given file

async rename_all_checkpoints(old_path, new_path)#

Rename all checkpoints for old_path to new_path.

async rename_checkpoint(checkpoint_id, old_path, new_path)#

Rename a single checkpoint from old_path to new_path.

async restore_checkpoint(contents_mgr, checkpoint_id, path)#

Restore a checkpoint

class jupyter_server.services.contents.checkpoints.AsyncGenericCheckpointsMixin#

Bases: GenericCheckpointsMixin

Helper for creating Asynchronous Checkpoints subclasses that can be used with any ContentsManager.

async create_checkpoint(contents_mgr, path)#
async create_file_checkpoint(content, format, path)#

Create a checkpoint of the current state of a file

Returns a checkpoint model for the new checkpoint.

async create_notebook_checkpoint(nb, path)#

Create a checkpoint of the current state of a file

Returns a checkpoint model for the new checkpoint.

async get_file_checkpoint(checkpoint_id, path)#

Get the content of a checkpoint for a non-notebook file.

Returns a dict of the form:

{
    'type': 'file',
    'content': <str>,
    'format': {'text','base64'},
}
async get_notebook_checkpoint(checkpoint_id, path)#

Get the content of a checkpoint for a notebook.

Returns a dict of the form:

{
    'type': 'notebook',
    'content': <output of nbformat.read>,
}
async restore_checkpoint(contents_mgr, checkpoint_id, path)#

Restore a checkpoint.

class jupyter_server.services.contents.checkpoints.Checkpoints(**kwargs)#

Bases: LoggingConfigurable

Base class for managing checkpoints for a ContentsManager.

Subclasses are required to implement:

  • create_checkpoint(self, contents_mgr, path)

  • restore_checkpoint(self, contents_mgr, checkpoint_id, path)

  • rename_checkpoint(self, checkpoint_id, old_path, new_path)

  • delete_checkpoint(self, checkpoint_id, path)

  • list_checkpoints(self, path)

create_checkpoint(contents_mgr, path)#

Create a checkpoint.

delete_all_checkpoints(path)#

Delete all checkpoints for the given path.

delete_checkpoint(checkpoint_id, path)#

delete a checkpoint for a file

list_checkpoints(path)#

Return a list of checkpoints for a given file

rename_all_checkpoints(old_path, new_path)#

Rename all checkpoints for old_path to new_path.

rename_checkpoint(checkpoint_id, old_path, new_path)#

Rename a single checkpoint from old_path to new_path.

restore_checkpoint(contents_mgr, checkpoint_id, path)#

Restore a checkpoint

class jupyter_server.services.contents.checkpoints.GenericCheckpointsMixin#

Bases: object

Helper for creating Checkpoints subclasses that can be used with any ContentsManager.

Provides a ContentsManager-agnostic implementation of create_checkpoint and restore_checkpoint in terms of the following operations:

  • create_file_checkpoint(self, content, format, path)

  • create_notebook_checkpoint(self, nb, path)

  • get_file_checkpoint(self, checkpoint_id, path)

  • get_notebook_checkpoint(self, checkpoint_id, path)

To create a generic CheckpointManager, add this mixin to a class that implements the above four methods plus the remaining Checkpoints API methods (see the sketch after this list):

  • delete_checkpoint(self, checkpoint_id, path)

  • list_checkpoints(self, path)

  • rename_checkpoint(self, checkpoint_id, old_path, new_path)
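
A minimal sketch, not a complete implementation: an in-memory checkpoints class built from this mixin. All names and the single fixed checkpoint id are illustrative.

from jupyter_server.services.contents.checkpoints import (
    Checkpoints,
    GenericCheckpointsMixin,
)


class InMemoryCheckpoints(GenericCheckpointsMixin, Checkpoints):
    # Maps (checkpoint_id, path) -> stored content.
    _store: dict = {}

    def create_file_checkpoint(self, content, format, path):
        self._store[("cp", path)] = {"type": "file", "content": content, "format": format}
        return {"id": "cp", "last_modified": None}

    def create_notebook_checkpoint(self, nb, path):
        self._store[("cp", path)] = {"type": "notebook", "content": nb}
        return {"id": "cp", "last_modified": None}

    def get_file_checkpoint(self, checkpoint_id, path):
        return self._store[(checkpoint_id, path)]

    def get_notebook_checkpoint(self, checkpoint_id, path):
        return self._store[(checkpoint_id, path)]

    def delete_checkpoint(self, checkpoint_id, path):
        self._store.pop((checkpoint_id, path), None)

    def list_checkpoints(self, path):
        return [{"id": cid, "last_modified": None} for (cid, p) in self._store if p == path]

    def rename_checkpoint(self, checkpoint_id, old_path, new_path):
        self._store[(checkpoint_id, new_path)] = self._store.pop((checkpoint_id, old_path))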

create_checkpoint(contents_mgr, path)#
create_file_checkpoint(content, format, path)#

Create a checkpoint of the current state of a file

Returns a checkpoint model for the new checkpoint.

create_notebook_checkpoint(nb, path)#

Create a checkpoint of the current state of a file

Returns a checkpoint model for the new checkpoint.

get_file_checkpoint(checkpoint_id, path)#

Get the content of a checkpoint for a non-notebook file.

Returns a dict of the form:

{
    'type': 'file',
    'content': <str>,
    'format': {'text','base64'},
}
get_notebook_checkpoint(checkpoint_id, path)#

Get the content of a checkpoint for a notebook.

Returns a dict of the form:

{
    'type': 'notebook',
    'content': <output of nbformat.read>,
}
restore_checkpoint(contents_mgr, checkpoint_id, path)#

Restore a checkpoint.

File-based Checkpoints implementations.

class jupyter_server.services.contents.filecheckpoints.AsyncFileCheckpoints(**kwargs)#

Bases: FileCheckpoints, AsyncFileManagerMixin, AsyncCheckpoints

async checkpoint_model(checkpoint_id, os_path)#

construct the info dict for a given checkpoint

async create_checkpoint(contents_mgr, path)#

Create a checkpoint.

async delete_checkpoint(checkpoint_id, path)#

delete a file’s checkpoint

async list_checkpoints(path)#

list the checkpoints for a given file

This contents manager currently only supports one checkpoint per file.

async rename_checkpoint(checkpoint_id, old_path, new_path)#

Rename a checkpoint from old_path to new_path.

async restore_checkpoint(contents_mgr, checkpoint_id, path)#

Restore a checkpoint.

class jupyter_server.services.contents.filecheckpoints.AsyncGenericFileCheckpoints(**kwargs)#

Bases: AsyncGenericCheckpointsMixin, AsyncFileCheckpoints

Asynchronous Local filesystem Checkpoints that works with any conforming ContentsManager.

async create_file_checkpoint(content, format, path)#

Create a checkpoint from the current content of a file.

async create_notebook_checkpoint(nb, path)#

Create a checkpoint from the current content of a notebook.

async get_file_checkpoint(checkpoint_id, path)#

Get a checkpoint for a file.

async get_notebook_checkpoint(checkpoint_id, path)#

Get a checkpoint for a notebook.

class jupyter_server.services.contents.filecheckpoints.FileCheckpoints(**kwargs)#

Bases: FileManagerMixin, Checkpoints

A Checkpoints that caches checkpoints for files in adjacent directories.

Only works with FileContentsManager. Use GenericFileCheckpoints if you want file-based checkpoints with another ContentsManager.

checkpoint_dir#

The directory name in which to keep file checkpoints

This is a path relative to the file’s own directory.

By default, it is .ipynb_checkpoints
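
For example, in a jupyter_server_config.py file (the value is illustrative):

c.FileCheckpoints.checkpoint_dir = ".my_checkpoints"  # relative to each file's directory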

checkpoint_model(checkpoint_id, os_path)#

construct the info dict for a given checkpoint

checkpoint_path(checkpoint_id, path)#

find the path to a checkpoint

create_checkpoint(contents_mgr, path)#

Create a checkpoint.

delete_checkpoint(checkpoint_id, path)#

delete a file’s checkpoint

list_checkpoints(path)#

list the checkpoints for a given file

This contents manager currently only supports one checkpoint per file.

no_such_checkpoint(path, checkpoint_id)#
rename_checkpoint(checkpoint_id, old_path, new_path)#

Rename a checkpoint from old_path to new_path.

restore_checkpoint(contents_mgr, checkpoint_id, path)#

Restore a checkpoint.

root_dir#

A trait for unicode strings.

class jupyter_server.services.contents.filecheckpoints.GenericFileCheckpoints(**kwargs)#

Bases: GenericCheckpointsMixin, FileCheckpoints

Local filesystem Checkpoints that works with any conforming ContentsManager.

create_file_checkpoint(content, format, path)#

Create a checkpoint from the current content of a file.

create_notebook_checkpoint(nb, path)#

Create a checkpoint from the current content of a notebook.

get_file_checkpoint(checkpoint_id, path)#

Get a checkpoint for a file.

get_notebook_checkpoint(checkpoint_id, path)#

Get a checkpoint for a notebook.

Utilities for file-based Contents/Checkpoints managers.

class jupyter_server.services.contents.fileio.AsyncFileManagerMixin(**kwargs)#

Bases: FileManagerMixin

Mixin for ContentsAPI classes that interact with the filesystem asynchronously.

class jupyter_server.services.contents.fileio.FileManagerMixin(**kwargs)#

Bases: LoggingConfigurable, Configurable

Mixin for ContentsAPI classes that interact with the filesystem.

Provides facilities for reading, writing, and copying files.

Shared by FileContentsManager and FileCheckpoints.

Note

Classes using this mixin must provide the following attributes:

root_dir : unicode

A directory against which API-style paths are to be resolved.

log : logging.Logger

atomic_writing(os_path, *args, **kwargs)#

Wrapper around atomic_writing that turns permission errors into 403. Depending on the flag 'use_atomic_writing', the wrapper performs an actual atomic write or simply writes the file (whether or not an old file exists).

hash_algorithm#

Hash algorithm to use for file content, as supported by hashlib.

open(os_path, *args, **kwargs)#

wrapper around io.open that turns permission errors into 403

perm_to_403(os_path='')#

context manager for turning permission errors into 403.

use_atomic_writing#

By default, notebooks are saved to a temporary file on disk and then, if successfully written, that file replaces the old one. This procedure, namely 'atomic_writing', causes some bugs on file systems without operation order enforcement (like some networked file systems). If set to False, the new notebook is written directly over the old one, which could fail (e.g. full filesystem or quota).

async jupyter_server.services.contents.fileio.async_copy2_safe(src, dst, log=None)#

copy src to dst asynchronously

like shutil.copy2, but log errors in copystat instead of raising

async jupyter_server.services.contents.fileio.async_replace_file(src, dst)#

replace dst with src asynchronously

jupyter_server.services.contents.fileio.atomic_writing(path, text=True, encoding='utf-8', log=None, **kwargs)#

Context manager to write to a file only if the entire write is successful.

This works by copying the previous file contents to a temporary file in the same directory, and renaming that file back to the target if the context exits with an error. If the context is successful, the new data is synced to disk and the temporary file is removed.

Parameters:
  • path (str) – The target file to write to.

  • text (bool, optional) – Whether to open the file in text mode (i.e. to write unicode). Default is True.

  • encoding (str, optional) – The encoding to use for files opened in text mode. Default is UTF-8.

  • **kwargs – Passed to io.open().
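
A short usage sketch (the path is made up); if the with-block raises, the original file contents are restored:

from jupyter_server.services.contents.fileio import atomic_writing

with atomic_writing("/tmp/example.txt", text=True, encoding="utf-8") as f:
    f.write("new contents")  # only replaces the file if the whole block succeeds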

jupyter_server.services.contents.fileio.copy2_safe(src, dst, log=None)#

copy src to dst

like shutil.copy2, but log errors in copystat instead of raising

jupyter_server.services.contents.fileio.path_to_intermediate(path)#

Name of the intermediate file used in atomic writes.

The .~ prefix will make Dropbox ignore the temporary file.

jupyter_server.services.contents.fileio.path_to_invalid(path)#

Name of invalid file after a failed atomic write and subsequent read.

jupyter_server.services.contents.fileio.replace_file(src, dst)#

replace dst with src

A contents manager that uses the local file system for storage.

class jupyter_server.services.contents.filemanager.AsyncFileContentsManager(**kwargs)#

Bases: FileContentsManager, AsyncFileManagerMixin, AsyncContentsManager

An async file contents manager.

async check_folder_size(path)#

Limit the size of folders being copied to no more than the max_copy_folder_size_mb trait, to prevent a timeout error.

Return type:

None

async copy(from_path, to_path=None)#

Copy an existing file or directory and return its new model. If to_path not specified, it will be the parent directory of from_path. If copying a file and to_path is a directory, filename/directoryname will increment from_path-Copy#.ext. Considering multi-part extensions, the Copy# part will be placed before the first dot for all the extensions except ipynb. For easier manual searching in case of notebooks, the Copy# part will be placed before the last dot. from_path must be a full path to a file or directory.

async delete_file(path)#

Delete file at path.

async dir_exists(path)#

Does a directory exist at the given path

async file_exists(path)#

Does a file exist at the given path

async get(path, content=True, type=None, format=None, require_hash=False)#

Takes a path for an entity and returns its model

Parameters:
  • path (str) – the API path that describes the relative path for the target

  • content (bool) – Whether to include the contents in the reply

  • type (str, optional) – The requested type - ‘file’, ‘notebook’, or ‘directory’. Will raise HTTPError 400 if the content doesn’t match.

  • format (str, optional) – The requested format for file contents. ‘text’ or ‘base64’. Ignored if this returns a notebook or directory model.

  • require_hash (bool, optional) – Whether to include the hash of the file contents.

Returns:

model – the contents model. If content=True, returns the contents of the file or directory as well.

Return type:

dict

async get_kernel_path(path, model=None)#

Return the initial API path of a kernel associated with a given notebook

async is_hidden(path)#

Is path a hidden directory or file

async rename_file(old_path, new_path)#

Rename a file.

async save(model, path='')#

Save the file model and return the model with no content.

class jupyter_server.services.contents.filemanager.FileContentsManager(**kwargs)#

Bases: FileManagerMixin, ContentsManager

A file contents manager.

always_delete_dir#

If True, deleting a non-empty directory will always be allowed. WARNING this may result in files being permanently removed; e.g. on Windows, if the data size is too big for the trash/recycle bin the directory will be permanently deleted. If False (default), the non-empty directory will be sent to the trash only if safe. And if delete_to_trash is True, the directory won’t be deleted.

check_folder_size(path)#

Limit the size of folders being copied to no more than the max_copy_folder_size_mb trait, to prevent a timeout error.

copy(from_path, to_path=None)#

Copy an existing file or directory and return its new model. If to_path not specified, it will be the parent directory of from_path. If copying a file and to_path is a directory, filename/directoryname will increment from_path-Copy#.ext. Considering multi-part extensions, the Copy# part will be placed before the first dot for all the extensions except ipynb. For easier manual searching in case of notebooks, the Copy# part will be placed before the last dot. from_path must be a full path to a file or directory.

delete_file(path)#

Delete file at path.

delete_to_trash#

If True (default), deleting files will send them to the platform’s trash/recycle bin, where they can be recovered. If False, deleting files really deletes them.

dir_exists(path)#

Does the API-style path refer to an extant directory?

API-style wrapper for os.path.isdir

Parameters:

path (str) – The path to check. This is an API path (/ separated, relative to root_dir).

Returns:

exists – Whether the path is indeed a directory.

Return type:

bool

exists(path)#

Returns True if the path exists, else returns False.

API-style wrapper for os.path.exists

Parameters:

path (str) – The API path to the file (with ‘/’ as separator)

Returns:

exists – Whether the target exists.

Return type:

bool

file_exists(path)#

Returns True if the file exists, else returns False.

API-style wrapper for os.path.isfile

Parameters:

path (str) – The relative path to the file (with ‘/’ as separator)

Returns:

exists – Whether the file exists.

Return type:

bool

get(path, content=True, type=None, format=None, require_hash=False)#

Takes a path for an entity and returns its model

Parameters:
  • path (str) – the API path that describes the relative path for the target

  • content (bool) – Whether to include the contents in the reply

  • type (str, optional) – The requested type - ‘file’, ‘notebook’, or ‘directory’. Will raise HTTPError 400 if the content doesn’t match.

  • format (str, optional) – The requested format for file contents. ‘text’ or ‘base64’. Ignored if this returns a notebook or directory model.

  • require_hash (bool, optional) – Whether to include the hash of the file contents.

Returns:

model – the contents model. If content=True, returns the contents of the file or directory as well.

Return type:

dict

get_kernel_path(path, model=None)#

Return the initial API path of a kernel associated with a given notebook

info_string()#

Get the information string for the manager.

is_hidden(path)#

Does the API style path correspond to a hidden directory or file?

Parameters:

path (str) – The path to check. This is an API path (/ separated, relative to root_dir).

Returns:

hidden – Whether the path exists and is hidden.

Return type:

bool

is_writable(path)#

Does the API style path correspond to a writable directory or file?

Parameters:

path (str) – The path to check. This is an API path (/ separated, relative to root_dir).

Returns:

hidden – Whether the path exists and is writable.

Return type:

bool

max_copy_folder_size_mb#

The max folder size that can be copied

rename_file(old_path, new_path)#

Rename a file.

root_dir#

A trait for unicode strings.

save(model, path='')#

Save the file model and return the model with no content.

Tornado handlers for the contents web service.

Preliminary documentation at ipython/ipython

class jupyter_server.services.contents.handlers.CheckpointsHandler(application, request, **kwargs)#

Bases: ContentsAPIHandler

A checkpoints API handler.

get(path='')#

get lists checkpoints for a file

post(path='')#

post creates a new checkpoint

class jupyter_server.services.contents.handlers.ContentsAPIHandler(application, request, **kwargs)#

Bases: APIHandler

A contents API handler.

auth_resource = 'contents'#
class jupyter_server.services.contents.handlers.ContentsHandler(application, request, **kwargs)#

Bases: ContentsAPIHandler

A contents handler.

delete(path='')#

delete a file in the given path

get(path='')#

Return a model for a file or directory.

A directory model contains a list of models (without content) of the files and directories it contains.

location_url(path)#

Return the full URL location of a file.

Parameters:

path (unicode) – The API path of the file, such as “foo/bar.txt”.

patch(path='')#

PATCH renames a file or directory without re-uploading content.

post(path='')#

Create a new file in the specified path.

POST creates new files. The server always decides on the name.

POST /api/contents/path

New untitled, empty file or directory.

POST /api/contents/path with body {"copy_from": "/path/to/OtherNotebook.ipynb"}

New copy of OtherNotebook in path.

put(path='')#

Saves the file in the location specified by name and path.

PUT is very similar to POST, but the requester specifies the name, whereas with POST, the server picks the name.

PUT /api/contents/path/Name.ipynb

Save notebook at path/Name.ipynb. The notebook structure is specified in the content key of the JSON request body. If content is not specified, create a new empty notebook.

class jupyter_server.services.contents.handlers.ModifyCheckpointsHandler(application, request, **kwargs)#

Bases: ContentsAPIHandler

A checkpoints modification handler.

delete(path, checkpoint_id)#

delete clears a checkpoint for a given file

post(path, checkpoint_id)#

post restores a file from a checkpoint

class jupyter_server.services.contents.handlers.NotebooksRedirectHandler(application, request, **kwargs)#

Bases: JupyterHandler

Redirect /api/notebooks to /api/contents

SUPPORTED_METHODS = ('GET', 'PUT', 'PATCH', 'POST', 'DELETE')#
delete(path)#

Handle a notebooks redirect.

get(path)#

Handle a notebooks redirect.

patch(path)#

Handle a notebooks redirect.

post(path)#

Handle a notebooks redirect.

put(path)#

Handle a notebooks redirect.

class jupyter_server.services.contents.handlers.TrustNotebooksHandler(application, request, **kwargs)#

Bases: JupyterHandler

Handles trust/signing of notebooks

post(path='')#

Trust a notebook by path.

jupyter_server.services.contents.handlers.validate_model(model, expect_content=False, expect_hash=False)#

Validate a model returned by a ContentsManager method.

If expect_content is True, then we expect non-null entries for ‘content’ and ‘format’.

If expect_hash is True, then we expect non-null entries for ‘hash’ and ‘hash_algorithm’.

class jupyter_server.services.contents.largefilemanager.AsyncLargeFileManager(**kwargs)#

Bases: AsyncFileContentsManager

Handle large file upload asynchronously

async save(model, path='')#

Save the file model and return the model with no content.

class jupyter_server.services.contents.largefilemanager.LargeFileManager(**kwargs)#

Bases: FileContentsManager

Handle large file upload.

save(model, path='')#

Save the file model and return the model with no content.

A base class for contents managers.

class jupyter_server.services.contents.manager.AsyncContentsManager(**kwargs)#

Bases: ContentsManager

Base class for serving files and directories asynchronously.

checkpoints#

A trait whose value must be an instance of a specified class.

The value can also be an instance of a subclass of the specified class.

Subclasses can declare default classes by overriding the klass attribute

checkpoints_class#

A trait whose value must be a subclass of a specified class.

checkpoints_kwargs#

An instance of a Python dict.

One or more traits can be passed to the constructor to validate the keys and/or values of the dict. If you need more detailed validation, you may use a custom validator method.

Changed in version 5.0: Added key_trait for validating dict keys.

Changed in version 5.0: Deprecated ambiguous trait, traits args in favor of value_trait, per_key_traits.

async copy(from_path, to_path=None)#

Copy an existing file and return its new model.

If to_path not specified, it will be the parent directory of from_path. If to_path is a directory, filename will increment from_path-Copy#.ext. Considering multi-part extensions, the Copy# part will be placed before the first dot for all the extensions except ipynb. For easier manual searching in case of notebooks, the Copy# part will be placed before the last dot.

from_path must be a full path to a file.

async create_checkpoint(path)#

Create a checkpoint.

async delete(path)#

Delete a file/directory and any associated checkpoints.

async delete_checkpoint(checkpoint_id, path)#

Delete a checkpoint for a path by id.

async delete_file(path)#

Delete the file or directory at path.

async dir_exists(path)#

Does a directory exist at the given path?

Like os.path.isdir

Override this method in subclasses.

Parameters:

path (str) – The path to check

Returns:

exists – Whether the path does indeed exist.

Return type:

bool

async exists(path)#

Does a file or directory exist at the given path?

Like os.path.exists

Parameters:

path (str) – The API path of a file or directory to check for.

Returns:

exists – Whether the target exists.

Return type:

bool

async file_exists(path='')#

Does a file exist at the given path?

Like os.path.isfile

Override this method in subclasses.

Parameters:

path (str) – The API path of a file to check for.

Returns:

exists – Whether the file exists.

Return type:

bool

async get(path, content=True, type=None, format=None, require_hash=False)#

Get a file or directory model.

Parameters:
  • require_hash (bool) – Whether the file hash must be returned or not.

Changed in version 2.11.

async increment_filename(filename, path='', insert='')#

Increment a filename until it is unique.

Parameters:
  • filename (unicode) – The name of a file, including extension

  • path (unicode) – The API path of the target’s directory

  • insert (unicode) – The characters to insert after the base filename

Returns:

name – A filename that is unique, based on the input filename.

Return type:

unicode

async is_hidden(path)#

Is path a hidden directory or file?

Parameters:

path (str) – The path to check. This is an API path (/ separated, relative to root dir).

Returns:

hidden – Whether the path is hidden.

Return type:

bool

async list_checkpoints(path)#

List the checkpoints for a path.

async new(model=None, path='')#

Create a new file or directory and return its model with no content.

To create a new untitled entity in a directory, use new_untitled.

async new_untitled(path='', type='', ext='')#

Create a new untitled file or directory in path

path must be a directory

File extension can be specified.

Use new to create files with a fully specified path (including filename).

async rename(old_path, new_path)#

Rename a file and any checkpoints associated with that file.

async rename_file(old_path, new_path)#

Rename a file or directory.

async restore_checkpoint(checkpoint_id, path)#

Restore a checkpoint.

async save(model, path)#

Save a file or directory model to path.

Should return the saved model with no content. Save implementations should call self.run_pre_save_hook(model=model, path=path) prior to writing any data.

async trust_notebook(path)#

Explicitly trust a notebook

Parameters:

path (str) – The path of a notebook

async update(model, path)#

Update the file’s path

For use in PATCH requests, to enable renaming a file without re-uploading its contents. Only used for renaming at the moment.

class jupyter_server.services.contents.manager.ContentsManager(**kwargs)#

Bases: LoggingConfigurable

Base class for serving files and directories.

This serves any text or binary file, as well as directories, with special handling for JSON notebook documents.

Most APIs take a path argument, which is always an API-style unicode path, and always refers to a directory.

  • unicode, not url-escaped

  • ‘/’-separated

  • leading and trailing ‘/’ will be stripped

  • if unspecified, path defaults to ‘’, indicating the root path.
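
For illustration, assuming cm is some ContentsManager instance (file names are made up), these calls all use API-style paths:

cm.get("")                                         # '' is the root directory
cm.get("notebooks/analysis.ipynb", content=False)  # '/'-separated, not url-escaped
cm.get("/notebooks/")                              # leading/trailing '/' are stripped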

allow_hidden#

Allow access to hidden files

check_and_sign(nb, path='')#

Check for trusted cells, and sign the notebook.

Called as a part of saving notebooks.

Parameters:
  • nb (dict) – The notebook dict

  • path (str) – The notebook’s path (for logging)

checkpoints#

A trait whose value must be an instance of a specified class.

The value can also be an instance of a subclass of the specified class.

Subclasses can declare default classes by overriding the klass attribute

checkpoints_class#

A trait whose value must be a subclass of a specified class.

checkpoints_kwargs#

An instance of a Python dict.

One or more traits can be passed to the constructor to validate the keys and/or values of the dict. If you need more detailed validation, you may use a custom validator method.

Changed in version 5.0: Added key_trait for validating dict keys.

Changed in version 5.0: Deprecated ambiguous trait, traits args in favor of value_trait, per_key_traits.

copy(from_path, to_path=None)#

Copy an existing file and return its new model.

If to_path not specified, it will be the parent directory of from_path. If to_path is a directory, filename will increment from_path-Copy#.ext. Considering multi-part extensions, the Copy# part will be placed before the first dot for all the extensions except ipynb. For easier manual searching in case of notebooks, the Copy# part will be placed before the last dot.

from_path must be a full path to a file.

create_checkpoint(path)#

Create a checkpoint.

delete(path)#

Delete a file/directory and any associated checkpoints.

delete_checkpoint(checkpoint_id, path)#
delete_file(path)#

Delete the file or directory at path.

dir_exists(path)#

Does a directory exist at the given path?

Like os.path.isdir

Override this method in subclasses.

Parameters:

path (str) – The path to check

Returns:

exists – Whether the path does indeed exist.

Return type:

bool

emit(data)#

Emit event using the core event schema from Jupyter Server’s Contents Manager.

event_logger#

A trait whose value must be an instance of a specified class.

The value can also be an instance of a subclass of the specified class.

Subclasses can declare default classes by overriding the klass attribute

event_schema_id = 'https://events.jupyter.org/jupyter_server/contents_service/v1'#
exists(path)#

Does a file or directory exist at the given path?

Like os.path.exists

Parameters:

path (str) – The API path of a file or directory to check for.

Returns:

exists – Whether the target exists.

Return type:

bool

file_exists(path='')#

Does a file exist at the given path?

Like os.path.isfile

Override this method in subclasses.

Parameters:

path (str) – The API path of a file to check for.

Returns:

exists – Whether the file exists.

Return type:

bool

files_handler_class#

handler class to use when serving raw file requests.

Default is a fallback that talks to the ContentsManager API, which may be inefficient, especially for large files.

Local files-based ContentsManagers can use a StaticFileHandler subclass, which will be much more efficient.

Access to these files should be Authenticated.

files_handler_params#

Extra parameters to pass to files_handler_class.

For example, StaticFileHandlers generally expect a path argument specifying the root directory from which to serve files.

get(path, content=True, type=None, format=None, require_hash=False)#

Get a file or directory model.

Parameters:
  • require_hash (bool) – Whether the file hash must be returned or not.

Changed in version 2.11.

get_extra_handlers()#

Return additional handlers

Default: self.files_handler_class on /files/.*

get_kernel_path(path, model=None)#

Return the API path for the kernel

KernelManagers can turn this value into a filesystem path, or ignore it altogether.

The default value here will start kernels in the directory of the notebook server. FileContentsManager overrides this to use the directory containing the notebook.

hide_globs#

Glob patterns to hide in file and directory listings.

increment_filename(filename, path='', insert='')#

Increment a filename until it is unique.

Parameters:
  • filename (unicode) – The name of a file, including extension

  • path (unicode) – The API path of the target’s directory

  • insert (unicode) – The characters to insert after the base filename

Returns:

name – A filename that is unique, based on the input filename.

Return type:

unicode

info_string()#

The information string for the manager.

is_hidden(path)#

Is path a hidden directory or file?

Parameters:

path (str) – The path to check. This is an API path (/ separated, relative to root dir).

Returns:

hidden – Whether the path is hidden.

Return type:

bool

list_checkpoints(path)#
log_info()#

Log the information string for the manager.

mark_trusted_cells(nb, path='')#

Mark cells as trusted if the notebook signature matches.

Called as a part of loading notebooks.

Parameters:
  • nb (dict) – The notebook object (in current nbformat)

  • path (str) – The notebook’s path (for logging)

new(model=None, path='')#

Create a new file or directory and return its model with no content.

To create a new untitled entity in a directory, use new_untitled.

new_untitled(path='', type='', ext='')#

Create a new untitled file or directory in path

path must be a directory

File extension can be specified.

Use new to create files with a fully specified path (including filename).

notary#

A trait whose value must be an instance of a specified class.

The value can also be an instance of a subclass of the specified class.

Subclasses can declare default classes by overriding the klass attribute

post_save_hook#

Python callable or importstring thereof

to be called on the path of a file just saved.

This can be used to process the file on disk, such as converting the notebook to a script or HTML via nbconvert.

It will be called as (all arguments passed by keyword):

hook(os_path=os_path, model=model, contents_manager=instance)
  • os_path: the filesystem path to the file just written

  • model: the model representing the file

  • contents_manager: this ContentsManager instance
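
For example, a hedged sketch of a post-save hook that exports saved notebooks as scripts via nbconvert (the function name and the config line are illustrative):

import subprocess

def convert_to_script(os_path, model, contents_manager, **kwargs):
    """Called by keyword after each save; export notebooks as .py scripts."""
    if model["type"] != "notebook":
        return
    subprocess.check_call(["jupyter", "nbconvert", "--to", "script", os_path])

# In jupyter_server_config.py:
# c.ContentsManager.post_save_hook = convert_to_script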

pre_save_hook#

Python callable or importstring thereof

To be called on a contents model prior to save.

This can be used to process the structure, such as removing notebook outputs or other side effects that should not be saved.

It will be called as (all arguments passed by keyword):

hook(path=path, model=model, contents_manager=self)
  • model: the model to be saved. Includes file contents. Modifying this dict will affect the file that is stored.

  • path: the API path of the save destination

  • contents_manager: this ContentsManager instance
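
For example, a hedged sketch of a pre-save hook that strips code-cell outputs before the notebook reaches disk (names are illustrative):

def strip_outputs(model, path, contents_manager, **kwargs):
    """Called by keyword before each save; modifying model affects what is stored."""
    if model["type"] != "notebook":
        return
    for cell in model["content"]["cells"]:
        if cell["cell_type"] == "code":
            cell["outputs"] = []
            cell["execution_count"] = None

# In jupyter_server_config.py:
# c.ContentsManager.pre_save_hook = strip_outputs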

preferred_dir#

Preferred starting directory to use for notebooks. This is an API path (/ separated, relative to root dir)

register_post_save_hook(hook)#

Register a post save hook.

register_pre_save_hook(hook)#

Register a pre save hook.

rename(old_path, new_path)#

Rename a file and any checkpoints associated with that file.

rename_file(old_path, new_path)#

Rename a file or directory.

restore_checkpoint(checkpoint_id, path)#

Restore a checkpoint.

root_dir#

A trait for unicode strings.

run_post_save_hook(model, os_path)#

Run the post-save hook if defined, and log errors

run_post_save_hooks(model, os_path)#

Run the post-save hooks if any, and log errors

run_pre_save_hook(model, path, **kwargs)#

Run the pre-save hook if defined, and log errors

run_pre_save_hooks(model, path, **kwargs)#

Run the pre-save hooks if any, and log errors

save(model, path)#

Save a file or directory model to path.

Should return the saved model with no content. Save implementations should call self.run_pre_save_hook(model=model, path=path) prior to writing any data.

should_list(name)#

Should this file/directory name be displayed in a listing?

trust_notebook(path)#

Explicitly trust a notebook

Parameters:

path (str) – The path of a notebook

untitled_directory#

The base name used when creating untitled directories.

untitled_file#

The base name used when creating untitled files.

untitled_notebook#

The base name used when creating untitled notebooks.

update(model, path)#

Update the file’s path

For use in PATCH requests, to enable renaming a file without re-uploading its contents. Only used for renaming at the moment.

validate_notebook_model(model, validation_error=None)#

Add failed-validation message to model

Module contents#
jupyter_server.services.events package#
Submodules#

A Websocket Handler for emitting Jupyter server events.

New in version 2.0.

class jupyter_server.services.events.handlers.EventHandler(application, request, **kwargs)#

Bases: APIHandler

REST api handler for events

auth_resource = 'events'#
post()#

Emit an event.

class jupyter_server.services.events.handlers.SubscribeWebsocket(application, request, **kwargs)#

Bases: JupyterHandler, WebSocketHandler

Websocket handler for subscribing to events

auth_resource = 'events'#
async event_listener(logger, schema_id, data)#

Write an event message.

Return type:

None

get(*args, **kwargs)#

Get an event socket.

on_close()#

Handle a socket close.

open()#

Routes events that are emitted by Jupyter Server’s EventBus to a WebSocket client in the browser.

async pre_get()#

Handles authorization when attempting to subscribe to events emitted by Jupyter Server’s eventbus.

jupyter_server.services.events.handlers.get_timestamp(data)#

Parses timestamp from the JSON request body

Return type:

Optional[datetime]

jupyter_server.services.events.handlers.validate_model(data)#

Validates for required fields in the JSON request body

Return type:

None

Module contents#
jupyter_server.services.kernels package#
Subpackages#
jupyter_server.services.kernels.connection package#
Submodules#
class jupyter_server.services.kernels.connection.abc.KernelWebsocketConnectionABC#

Bases: ABC

This class defines a minimal interface that should be used to bridge the connection between Jupyter Server’s websocket API and a kernel’s ZMQ socket interface.

abstract async connect()#

Connect the kernel websocket to the kernel ZMQ connections

abstract async disconnect()#

Disconnect the kernel websocket from the kernel ZMQ connections

abstract handle_incoming_message(incoming_msg)#

Broker the incoming websocket message to the appropriate ZMQ channel.

Return type:

None

abstract handle_outgoing_message(stream, outgoing_msg)#

Broker outgoing ZMQ messages to the kernel websocket.

Return type:

None

websocket_handler: Any#

Kernel connection helpers.

class jupyter_server.services.kernels.connection.base.BaseKernelWebsocketConnection(**kwargs)#

Bases: LoggingConfigurable

A configurable base class for connecting Kernel WebSockets to ZMQ sockets.

async connect()#

Handle a connect.

async disconnect()#

Handle a disconnect.

handle_incoming_message(incoming_msg)#

Handle an incoming message.

Return type:

None

handle_outgoing_message(stream, outgoing_msg)#

Handle an outgoing message.

Return type:

None

property kernel_id#

The kernel id.

kernel_info_timeout#

A float trait.

property kernel_manager#

The kernel manager.

kernel_ws_protocol#

Preferred kernel message protocol over websocket to use (default: None). If an empty string is passed, select the legacy protocol. If None, the selected protocol will depend on what the front-end supports (usually the most recent protocol supported by the back-end and the front-end).

property multi_kernel_manager#

The multi kernel manager.

session#

A trait whose value must be an instance of a specified class.

The value can also be an instance of a subclass of the specified class.

Subclasses can declare default classes by overriding the klass attribute

property session_id#

The session id.

websocket_handler#

A trait whose value must be an instance of a specified class.

The value can also be an instance of a subclass of the specified class.

Subclasses can declare default classes by overriding the klass attribute

jupyter_server.services.kernels.connection.base.deserialize_binary_message(bmsg)#

Deserialize a message from a binary blob.

Header:

4 bytes: number of msg parts (nbufs) as 32b int
4 * nbufs bytes: offset for each buffer as integer as 32b int

Offsets are from the start of the buffer, including the header.

Return type:

message dictionary
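
To make the header layout concrete, here is a hedged sketch of splitting such a blob by hand (the helpers documented here already do this for you; the function name is illustrative):

import json
import struct

def split_binary_message(bmsg: bytes):
    (nbufs,) = struct.unpack("!i", bmsg[:4])                    # number of msg parts
    offsets = list(struct.unpack("!%ii" % nbufs, bmsg[4:4 * (nbufs + 1)]))
    offsets.append(len(bmsg))
    bufs = [bmsg[offsets[i]:offsets[i + 1]] for i in range(nbufs)]
    msg = json.loads(bufs[0].decode("utf8"))                    # first part is the JSON message
    return msg, bufs[1:]                                        # remaining parts are raw buffers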

jupyter_server.services.kernels.connection.base.deserialize_msg_from_ws_v1(ws_msg)#

Deserialize a message using the v1 protocol.

jupyter_server.services.kernels.connection.base.serialize_binary_message(msg)#

serialize a message as a binary blob

Header:

4 bytes: number of msg parts (nbufs) as 32b int
4 * nbufs bytes: offset for each buffer as integer as 32b int

Offsets are from the start of the buffer, including the header.

Return type:

The message serialized to bytes.

jupyter_server.services.kernels.connection.base.serialize_msg_to_ws_v1(msg_or_list, channel, pack=None)#

Serialize a message using the v1 protocol.

An implementation of a kernel connection.

class jupyter_server.services.kernels.connection.channels.ZMQChannelsWebsocketConnection(**kwargs)#

Bases: BaseKernelWebsocketConnection

A Jupyter Server Websocket Connection

channels#

An instance of a Python dict.

One or more traits can be passed to the constructor to validate the keys and/or values of the dict. If you need more detailed validation, you may use a custom validator method.

Changed in version 5.0: Added key_trait for validating dict keys.

Changed in version 5.0: Deprecated ambiguous trait, traits args in favor of value_trait, per_key_traits.

close()#

Close the connection.

async classmethod close_all()#

Tornado does not provide a way to close open sockets, so add one.

connect()#

Handle a connection.

create_stream()#

Create a stream.

disconnect()#

Handle a disconnect.

get_part(field, value, msg_list)#

Get a part of a message.

handle_incoming_message(incoming_msg)#

Handle incoming messages from Websocket to ZMQ Sockets.

Return type:

None

handle_outgoing_message(stream, outgoing_msg)#

Handle the outgoing messages from ZMQ sockets to Websocket.

Return type:

None

iopub_data_rate_limit#

(bytes/sec) Maximum rate at which stream output can be sent on iopub before it is limited.

iopub_msg_rate_limit#

(msgs/sec) Maximum rate at which messages can be sent on iopub before they are limited.

kernel_info_channel#

A trait which allows any value.

limit_rate#

Whether to limit the rate of IOPub messages (default: True). If True, use iopub_msg_rate_limit, iopub_data_rate_limit and/or rate_limit_window to tune the rate.

nudge()#

Nudge the zmq connections with kernel_info_requests. Returns a Future that will resolve when we have received a shell or control reply and at least one iopub message, ensuring that zmq subscriptions are established, sockets are fully connected, and the kernel is responsive. Keeps retrying kernel_info_request until these are both received.

on_kernel_restarted()#

Handle a kernel restart.

on_restart_failed()#

Handle a kernel restart failure.

async prepare()#

Prepare a kernel connection.

rate_limit_window#

(sec) Time window used to check the message and data rate limits.
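
An illustrative rate-limit tuning block for a jupyter_server_config.py file (values are made up):

c.ZMQChannelsWebsocketConnection.limit_rate = True
c.ZMQChannelsWebsocketConnection.iopub_msg_rate_limit = 3000        # msgs/sec
c.ZMQChannelsWebsocketConnection.iopub_data_rate_limit = 2_000_000  # bytes/sec
c.ZMQChannelsWebsocketConnection.rate_limit_window = 3              # seconds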

request_kernel_info()#

send a request for kernel_info

session_key#

A trait for unicode strings.

property subprotocol#

The sub protocol.

websocket_handler#

A trait whose value must be an instance of a specified class.

The value can also be an instance of a subclass of the specified class.

Subclasses can declare default classes by overriding the klass attribute

property write_message#

Alias to the websocket handler’s write_message method.

write_stderr(error_message, parent_header)#

Write a message to stderr.

Module contents#
Submodules#

Tornado handlers for kernels.

Preliminary documentation at ipython/ipython

class jupyter_server.services.kernels.handlers.KernelActionHandler(application, request, **kwargs)#

Bases: KernelsAPIHandler

A kernel action API handler.

post(kernel_id, action)#

Interrupt or restart a kernel.

class jupyter_server.services.kernels.handlers.KernelHandler(application, request, **kwargs)#

Bases: KernelsAPIHandler

A kernel API handler.

delete(kernel_id)#

Remove a kernel.

get(kernel_id)#

Get a kernel model.

class jupyter_server.services.kernels.handlers.KernelsAPIHandler(application, request, **kwargs)#

Bases: APIHandler

A kernels API handler.

auth_resource = 'kernels'#
class jupyter_server.services.kernels.handlers.MainKernelHandler(application, request, **kwargs)#

Bases: KernelsAPIHandler

The root kernel handler.

get()#

Get the list of running kernels.

post()#

Start a kernel.

A MultiKernelManager for use in the Jupyter server

  • raises HTTPErrors

  • creates REST API models

class jupyter_server.services.kernels.kernelmanager.AsyncMappingKernelManager(**kwargs: Any)#

Bases: MappingKernelManager, AsyncMultiKernelManager

An asynchronous mapping kernel manager.

class jupyter_server.services.kernels.kernelmanager.MappingKernelManager(**kwargs: Any)#

Bases: MultiKernelManager

A KernelManager that handles:

  • File mapping

  • HTTP error handling

  • Kernel message filtering

allow_tracebacks#

Whether to send tracebacks to clients on exceptions.

allowed_message_types#

White list of allowed kernel message types. When the list is empty, all message types are allowed.

buffer_offline_messages#

Whether messages from kernels whose frontends have disconnected should be buffered in-memory.

When True (default), messages are buffered and replayed on reconnect, avoiding lost messages due to interrupted connectivity.

Disable if long-running kernels will produce too much output while no frontends are connected.

cull_busy#

Whether to consider culling kernels which are busy. Only effective if cull_idle_timeout > 0.

cull_connected#

Whether to consider culling kernels which have one or more connections. Only effective if cull_idle_timeout > 0.

cull_idle_timeout#

Timeout (in seconds) after which a kernel is considered idle and ready to be culled. Values of 0 or lower disable culling. Very short timeouts may result in kernels being culled for users with poor network connections.

cull_interval#

The interval (in seconds) on which to check for idle kernels exceeding the cull timeout value.

cull_interval_default = 300#
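
An illustrative culling setup for a jupyter_server_config.py file (values are made up):

c.MappingKernelManager.cull_idle_timeout = 3600   # cull kernels idle for an hour
c.MappingKernelManager.cull_interval = 300        # check every five minutes
c.MappingKernelManager.cull_busy = False          # never cull busy kernels
c.MappingKernelManager.cull_connected = False     # spare kernels with open connections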
async cull_kernel_if_idle(kernel_id)#

Cull a kernel if it is idle.

async cull_kernels()#

Handle culling kernels.

cwd_for_path(path, **kwargs)#

Turn API path into absolute OS path.

get_buffer(kernel_id, session_key)#

Get the buffer for a given kernel

Parameters:
  • kernel_id (str) – The id of the kernel to stop buffering.

  • session_key (str, optional) – The session_key, if any, that should get the buffer. If the session_key matches the current buffered session_key, the buffer will be returned.

initialize_culler()#

Start idle culler if ‘cull_idle_timeout’ is greater than zero.

Regardless of that value, set flag that we’ve been here.

kernel_argv#

An instance of a Python list.

kernel_info_timeout#

Timeout for giving up on a kernel (in seconds).

On starting and restarting kernels, we check whether the kernel is running and responsive by sending kernel_info_requests. This sets the timeout in seconds for how long the kernel can take before being presumed dead. This affects the MappingKernelManager (which handles kernel restarts) and the ZMQChannelsHandler (which handles the startup).

kernel_model(kernel_id)#

Return a JSON-safe dict representing a kernel

For use in representing kernels in the JSON APIs.

last_kernel_activity#

The last activity on any kernel, including shutting down a kernel

list_kernels()#

Returns a list of kernel_id’s of kernels running.

notify_connect(kernel_id)#

Notice a new connection to a kernel

notify_disconnect(kernel_id)#

Notice a disconnection from a kernel

ports_changed(kernel_id)#

Used by ZMQChannelsHandler to determine how to coordinate nudge and replays.

Ports are captured when starting a kernel (via MappingKernelManager). Ports are considered changed (following restarts) if the referenced KernelManager is using a set of ports different from those captured at startup. If changes are detected, the captured set is updated and a value of True is returned.

NOTE: Use is exclusive to ZMQChannelsHandler because this object is a singleton instance while ZMQChannelsHandler instances are per WebSocket connection that can vary per kernel lifetime.

async restart_kernel(kernel_id, now=False)#

Restart a kernel by kernel_id

root_dir#

A trait for unicode strings.

async shutdown_kernel(kernel_id, now=False, restart=False)#

Shutdown a kernel by kernel_id

start_buffering(kernel_id, session_key, channels)#

Start buffering messages for a kernel

Parameters:
  • kernel_id (str) – The id of the kernel to start buffering.

  • session_key (str) – The session_key, if any, that should get the buffer. If the session_key matches the current buffered session_key, the buffer will be returned.

  • channels (dict({'channel': ZMQStream})) – The zmq channels whose messages should be buffered.

async start_kernel(*, kernel_id=None, path=None, **kwargs)#

Start a kernel for a session and return its kernel_id.

Parameters:
  • kernel_id (uuid (str)) – The uuid to associate the new kernel with. If this is not None, this kernel will be persistent whenever it is requested.

  • path (API path) – The API path (unicode, ‘/’ delimited) for the cwd. Will be transformed to an OS path relative to root_dir.

  • kernel_name (str) – The name identifying which kernel spec to launch. This is ignored if an existing kernel is returned, but it may be checked in the future.

Return type:

str

start_watching_activity(kernel_id)#

Start watching IOPub messages on a kernel for activity.

  • update last_activity on every message

  • record execution_state from status messages

stop_buffering(kernel_id)#

Stop buffering kernel messages

Parameters:

kernel_id (str) – The id of the kernel to stop buffering.

stop_watching_activity(kernel_id)#

Stop watching IOPub messages on a kernel for activity.

traceback_replacement_message#

Message to print when allow_tracebacks is False, and an exception occurs

class jupyter_server.services.kernels.kernelmanager.ServerKernelManager(*args, **kwargs)#

Bases: AsyncIOLoopKernelManager

A server-specific kernel manager.

property core_event_schema_paths: list[Path]#
emit(schema_id, data)#

Emit an event from the kernel manager.

event_logger#

A trait whose value must be an instance of a specified class.

The value can also be an instance of a subclass of the specified class.

Subclasses can declare default classes by overriding the klass attribute

execution_state#

The current execution state of the kernel

extra_event_schema_paths: List[str]#

A list of pathlib.Path objects pointing at event schema files to register with the kernel manager’s eventlogger.

async interrupt_kernel(*args, **kwargs)#

Interrupts the kernel by sending it a signal.

Unlike signal_kernel, this operation is well supported on all platforms.

last_activity#

The last activity on the kernel

reason#

The reason for the last failure against the kernel

async restart_kernel(*args, **kwargs)#

Restarts a kernel with the arguments that were used to launch it.

Parameters:
  • now (bool, optional) –

    If True, the kernel is forcefully restarted immediately, without having a chance to do any cleanup action. Otherwise the kernel is given 1s to clean up before a forceful restart is issued.

    In all cases the kernel is restarted, the only difference is whether it is given a chance to perform a clean shutdown or not.

  • newports (bool, optional) – If the old kernel was launched with random ports, this flag decides whether the same ports and connection file will be used again. If False, the same ports and connection file are used. This is the default. If True, new random port numbers are chosen and a new connection file is written. It is still possible that the newly chosen random port numbers happen to be the same as the old ones.

  • **kw (optional) – Any options specified here will overwrite those used to launch the kernel.

async shutdown_kernel(*args, **kwargs)#

Attempts to stop the kernel process cleanly.

This attempts to shutdown the kernels cleanly by:

  1. Sending it a shutdown message over the control channel.

  2. If that fails, the kernel is shutdown forcibly by sending it a signal.

Parameters:
  • now (bool) – Should the kernel be forcible killed now. This skips the first, nice shutdown attempt.

  • restart (bool) – Will this kernel be restarted after it is shutdown. When this is True, connection files will not be cleaned up.

async start_kernel(*args, **kwargs)#

Starts a kernel on this host in a separate process.

If random ports (port=0) are being used, this method must be called before the channels are created.

Parameters:

**kw (optional) – keyword arguments that are passed down to build the kernel_cmd and launching the kernel (e.g. Popen kwargs).

jupyter_server.services.kernels.kernelmanager.emit_kernel_action_event(success_msg='')#

Decorate kernel action methods to begin emitting jupyter kernel action events.

Parameters:
  • success_msg (str) – A formattable string that’s passed to the message field of the emitted event when the action succeeds. You can include the kernel_id, kernel_name, or action in the message using a formatted string argument, e.g. “{kernel_id} succeeded to {action}.”

  • error_msg (str) – A formattable string that’s passed to the message field of the emitted event when the action fails. You can include the kernel_id, kernel_name, or action in the message using a formatted string argument, e.g. “{kernel_id} failed to {action}.”

Return type:

Callable[..., Any]
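As a minimal sketch (not part of the library itself), the decorator might be applied to a kernel-action coroutine on a ServerKernelManager subclass; the class name and message below are illustrative:

from jupyter_server.services.kernels.kernelmanager import (
    ServerKernelManager,
    emit_kernel_action_event,
)

class AuditedKernelManager(ServerKernelManager):
    @emit_kernel_action_event(success_msg="Kernel {kernel_id} was interrupted.")
    async def interrupt_kernel(self, *args, **kwargs):
        # Delegate to the parent implementation; the decorator emits the
        # kernel action event when the coroutine succeeds or fails.
        return await super().interrupt_kernel(*args, **kwargs)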

Tornado handlers for WebSocket <-> ZMQ sockets.

class jupyter_server.services.kernels.websocket.KernelWebsocketHandler(application, request, **kwargs)#

Bases: WebSocketMixin, WebSocketHandler, JupyterHandler

The websocket handler that kernel clients connect to.

auth_resource = 'kernels'#
get(kernel_id)#

Handle a get request for a kernel.

get_compression_options()#

Get the socket connection options.

property kernel_websocket_connection_class#

The kernel websocket connection class.

on_close()#

Handle a socket closure.

on_message(ws_message)#

Get a kernel message from the websocket and turn it into a ZMQ message.

async open(kernel_id)#

Open a kernel websocket.

async pre_get()#

Handle a pre_get.

select_subprotocol(subprotocols)#

Select the sub protocol for the socket.

set_default_headers()#

Undo the set_default_headers in JupyterHandler, which doesn’t make sense for websockets.

Module contents#
jupyter_server.services.kernelspecs package#
Submodules#

Tornado handlers for kernel specifications.

Preliminary documentation at ipython/ipython

class jupyter_server.services.kernelspecs.handlers.KernelSpecHandler(application, request, **kwargs)#

Bases: KernelSpecsAPIHandler

A handler for an individual kernel spec.

get(kernel_name)#

Get a kernel spec model.

class jupyter_server.services.kernelspecs.handlers.KernelSpecsAPIHandler(application, request, **kwargs)#

Bases: APIHandler

A kernel spec API handler.

auth_resource = 'kernelspecs'#
class jupyter_server.services.kernelspecs.handlers.MainKernelSpecHandler(application, request, **kwargs)#

Bases: KernelSpecsAPIHandler

The root kernel spec handler.

get()#

Get the list of kernel specs.

jupyter_server.services.kernelspecs.handlers.is_kernelspec_model(spec_dict)#

Returns True if spec_dict is already in proper form. This will occur when using a gateway.

jupyter_server.services.kernelspecs.handlers.kernelspec_model(handler, name, spec_dict, resource_dir)#

Load a KernelSpec by name and return the REST API model

Module contents#
jupyter_server.services.nbconvert package#
Submodules#

API Handlers for nbconvert.

class jupyter_server.services.nbconvert.handlers.NbconvertRootHandler(application, request, **kwargs)#

Bases: APIHandler

The nbconvert root API handler.

auth_resource = 'nbconvert'#
get()#

Get the list of nbconvert exporters.

initialize(**kwargs)#

Initialize an nbconvert root handler.

Module contents#
jupyter_server.services.security package#
Submodules#

Tornado handlers for security logging.

class jupyter_server.services.security.handlers.CSPReportHandler(application, request, **kwargs)#

Bases: APIHandler

Accepts a content security policy violation report

auth_resource = 'csp'#

Don’t check XSRF for CSP reports.

post()#

Log a content security policy violation report

skip_check_origin()#

Don’t check origin when reporting origin-check violations!

Module contents#
jupyter_server.services.sessions package#
Submodules#

Tornado handlers for the sessions web service.

Preliminary documentation at ipython/ipython

class jupyter_server.services.sessions.handlers.SessionHandler(application, request, **kwargs)#

Bases: SessionsAPIHandler

A handler for a single session.

delete(session_id)#

Delete the session with given session_id.

get(session_id)#

Get the JSON model for a single session.

patch(session_id)#

Patch updates sessions:

  • path updates session to track renamed paths

  • kernel.name starts a new kernel with a given kernelspec

class jupyter_server.services.sessions.handlers.SessionRootHandler(application, request, **kwargs)#

Bases: SessionsAPIHandler

A Session Root API handler.

get()#

Get a list of running sessions.

post()#

Create a new session.

class jupyter_server.services.sessions.handlers.SessionsAPIHandler(application, request, **kwargs)#

Bases: APIHandler

A Sessions API handler.

auth_resource = 'sessions'#

A base class session manager.

class jupyter_server.services.sessions.sessionmanager.KernelSessionRecord(session_id=None, kernel_id=None)#

Bases: object

A record object for tracking a Jupyter Server Kernel Session.

Two records that share a session_id must also share a kernel_id, while kernels can have multiple sessions (and thereby multiple session_ids) associated with them.

kernel_id: Optional[str] = None#
session_id: Optional[str] = None#
update(other)#

Updates in-place a record from other (only accepts positive updates).

Return type:

None

exception jupyter_server.services.sessions.sessionmanager.KernelSessionRecordConflict#

Bases: Exception

Exception class to use when two KernelSessionRecords cannot merge because of conflicting data.

class jupyter_server.services.sessions.sessionmanager.KernelSessionRecordList(*records)#

Bases: object

An object for storing and managing a list of KernelSessionRecords.

When adding a record to the list, the KernelSessionRecordList first checks if the record already exists in the list. If it does, the record will be updated with the new information; otherwise, it will be appended.

get(record)#

Return a full KernelSessionRecord from a session_id, kernel_id, or incomplete KernelSessionRecord.

Return type:

KernelSessionRecord

remove(record)#

Remove a record if it’s found in the list. If it’s not found, do nothing.

Return type:

None

update(record)#

Update a record in-place or append it if not in the list.

Return type:

None

class jupyter_server.services.sessions.sessionmanager.SessionManager(**kwargs: Any)#

Bases: LoggingConfigurable

A session manager.

close()#

Close the sqlite connection

property connection#

Start a database connection

contents_manager#

A trait whose value must be an instance of a class in a specified list of classes. The value can also be an instance of a subclass of the specified classes. Subclasses can declare default classes by overriding the klass attribute

async create_session(path=None, name=None, type=None, kernel_name=None, kernel_id=None)#

Creates a session and returns its model

Parameters:

name (ModelName(str)) – Usually the model name, like the filename associated with current kernel.

Return type:

Dict[str, Any]

property cursor#

Start a cursor and create a database called ‘session’

database_filepath#

The filesystem path to the SQLite database file (e.g. /path/to/session_database.db). By default, the session database is stored in-memory (i.e. the :memory: setting from sqlite3) and does not persist when the current Jupyter Server shuts down.
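For example, to persist the session database across server restarts, a jupyter_server_config.py file might set (the path below is a placeholder):

c.SessionManager.database_filepath = "/path/to/session_database.db"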

async delete_session(session_id)#

Deletes the row in the session database with given session_id

get_kernel_env(path, name=None)#

Return the environment variables that need to be set in the kernel

Parameters:
  • path (str) – the url path for the given session.

  • name (ModelName(str), optional) – Here the name is likely to be the name of the associated file with the current kernel at startup time.

Return type:

Dict[str, str]

async get_session(**kwargs)#

Returns the model for a particular session.

Takes a keyword argument and searches for the value in the session database, then returns the rest of the session’s info.

Parameters:

**kwargs (dict) – must be given one of the keywords and values from the session database (i.e. session_id, path, name, type, kernel_id)

Returns:

model – returns a dictionary that includes all the information from the session described by the kwarg.

Return type:

dict

async kernel_culled(kernel_id)#

Checks if the kernel is still considered alive and returns True if it’s not found.

Return type:

bool

kernel_manager#

A trait whose value must be an instance of a specified class.

The value can also be an instance of a subclass of the specified class.

Subclasses can declare default classes by overriding the klass attribute

async list_sessions()#

Returns a list of dictionaries containing all the information from the session database

new_session_id()#

Create a uuid for a new session

Return type:

str

async row_to_model(row, tolerate_culled=False)#

Takes sqlite database session row and turns it into a dictionary

async save_session(session_id, path=None, name=None, type=None, kernel_id=None)#

Saves the items for the session with the given session_id

Given a session_id (and any other of the arguments), this method creates a row in the sqlite session database that holds the information for a session.

Parameters:
  • session_id (str) – uuid for the session; this method must be given a session_id

  • path (str) – the path for the given session

  • name (str) – the name of the session

  • type (str) – the type of the session

  • kernel_id (str) – a uuid for the kernel associated with this session

Returns:

model – a dictionary of the session model

Return type:

dict

async session_exists(path)#

Check to see if the session of a given name exists

async start_kernel_for_session(session_id, path, name, type, kernel_name)#

Start a new kernel for a given session.

Parameters:
  • session_id (str) – uuid for the session; this method must be given a session_id

  • path (str) – the path for the given session (this sometimes appears to be a session id).

  • name (str) – Usually the model name, like the filename associated with current kernel.

  • type (str) – the type of the session

  • kernel_name (str) – the name of the kernel specification to use. The default kernel name will be used if not provided.

Return type:

str

async update_session(session_id, **kwargs)#

Updates the values in the session database.

Changes the values of the session with the given session_id with the values from the keyword arguments.

Parameters:
  • session_id (str) – a uuid that identifies a session in the sqlite3 database

  • **kwargs (str) – the key must correspond to a column title in session database, and the value replaces the current value in the session with session_id.

Module contents#
Submodules#

HTTP handler to shut down the Jupyter server.

class jupyter_server.services.shutdown.ShutdownHandler(application, request, **kwargs)#

Bases: JupyterHandler

A shutdown API handler.

auth_resource = 'server'#
post()#

Shut down the server.

Module contents#
jupyter_server.view package#
Submodules#

Tornado handlers for viewing HTML files.

class jupyter_server.view.handlers.ViewHandler(application, request, **kwargs)#

Bases: JupyterHandler

Render HTML files within an iframe.

auth_resource = 'contents'#
get(path)#

Get a view on a given path.

Module contents#

Tornado handlers for viewing HTML files.

Submodules#

Manager to read and modify config data in JSON files.

class jupyter_server.config_manager.BaseJSONConfigManager(**kwargs)#

Bases: LoggingConfigurable

General JSON config manager

Deals with persisting/storing config in a JSON file, with optional default values in a {section_name}.d directory.

config_dir#

A trait for unicode strings.

directory(section_name)#

Returns the directory name for the section name: {config_dir}/{section_name}.d

Return type:

str

ensure_config_dir_exists()#

Will try to create the config_dir directory.

Return type:

None

file_name(section_name)#

Returns the json filename for the section_name: {config_dir}/{section_name}.json

Return type:

str

get(section_name, include_root=True)#

Retrieve the config data for the specified section.

Returns the data as a dictionary, or an empty dictionary if the file doesn’t exist.

When include_root is False, it will not read the root .json file, effectively returning the default values.

Return type:

dict[str, Any]

read_directory#

A boolean (True, False) trait.

set(section_name, data)#

Store the given config data.

Return type:

None

update(section_name, new_data)#

Modify the config section by recursively updating it with new_data.

Returns the modified config data as a dictionary.

Return type:

dict[str, Any]

jupyter_server.config_manager.recursive_update(target, new)#

Recursively update one dictionary using another.

None values will delete their keys.

Return type:

None
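A small illustration of the behaviour described above:

from jupyter_server.config_manager import recursive_update

target = {"a": {"b": 1, "c": 2}}
recursive_update(target, {"a": {"c": None, "d": 3}})
# target is now {"a": {"b": 1, "d": 3}}; the None value removed the "c" key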

jupyter_server.config_manager.remove_defaults(data, defaults)#

Recursively remove items from dict that are already in defaults

Return type:

None

Log utilities.

jupyter_server.log.log_request(handler)#

log a bit more information about each request than tornado’s default

  • move static file get success to debug-level (reduces noise)

  • get proxied IP instead of proxy IP

  • log referer for redirect and failed requests

  • log user-agent for failed requests

A tornado based Jupyter server.

class jupyter_server.serverapp.JupyterPasswordApp(**kwargs)#

Bases: JupyterApp

Set a password for the Jupyter server.

Setting a password secures the Jupyter server and removes the need for token-based authentication.

description: str = 'Set a password for the Jupyter server.\n\n    Setting a password secures the Jupyter server\n    and removes the need for token-based authentication.\n    '#
start()#

Start the password app.

class jupyter_server.serverapp.JupyterServerListApp(**kwargs)#

Bases: JupyterApp

An application to list running Jupyter servers.

description: str = 'List currently running Jupyter servers.'#
flags: StrDict = {'json': ({'JupyterServerListApp': {'json': True}}, 'Produce machine-readable JSON object on each line of output.'), 'jsonlist': ({'JupyterServerListApp': {'jsonlist': True}}, 'Produce machine-readable JSON list output.')}#
json#

If True, each line of output will be a JSON object with the details from the server info file. For a JSON list output, see the JupyterServerListApp.jsonlist configuration value

jsonlist#

If True, the output will be a JSON list of objects, one per active Jupyter server, each with the details from the relevant server info file.

start()#

Start the server list application.

version: str = '2.14.0'#
class jupyter_server.serverapp.JupyterServerStopApp(**kwargs)#

Bases: JupyterApp

An application to stop a Jupyter server.

description: str = 'Stop currently running Jupyter server for a given port'#
parse_command_line(argv=None)#

Parse command line options.

port#

Port of the server to be killed. Default 8888

shutdown_server(server)#

Shut down a server.

sock#

UNIX socket of the server to be killed.

start()#

Start the server stop app.

version: str = '2.14.0'#
class jupyter_server.serverapp.ServerApp(**kwargs)#

Bases: JupyterApp

The Jupyter Server application class.

aliases: StrDict#

An instance of a Python dict.

One or more traits can be passed to the constructor to validate the keys and/or values of the dict. If you need more detailed validation, you may use a custom validator method.

Changed in version 5.0: Added key_trait for validating dict keys.

Changed in version 5.0: Deprecated ambiguous trait, traits args in favor of value_trait, per_key_traits.

allow_credentials#

Set the Access-Control-Allow-Credentials: true header

allow_external_kernels#

Whether or not to allow external kernels, whose connection files are placed in external_connection_dir.

allow_origin#

Set the Access-Control-Allow-Origin header

Use ‘*’ to allow any origin to access your server.

Takes precedence over allow_origin_pat.

allow_origin_pat#

Use a regular expression for the Access-Control-Allow-Origin header

Requests from an origin matching the expression will get replies with:

Access-Control-Allow-Origin: origin

where origin is the origin of the request.

Ignored if allow_origin is set.
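As an illustration, a jupyter_server_config.py file could set these CORS-related options like this (the origins are placeholders):

c.ServerApp.allow_origin = "https://dashboard.example.com"
# or, to match several origins with a pattern (ignored if allow_origin is set):
# c.ServerApp.allow_origin_pat = r"https://.*\.example\.com"
c.ServerApp.allow_credentials = True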

allow_password_change#

DEPRECATED in 2.0. Use PasswordIdentityProvider.allow_password_change

allow_remote_access#

Allow requests where the Host header doesn’t point to a local server

By default, requests get a 403 forbidden response if the ‘Host’ header shows that the browser thinks it’s on a non-local domain. Setting this option to True disables this check.

This protects against ‘DNS rebinding’ attacks, where a remote web server serves you a page and then changes its DNS to send later requests to a local IP, bypassing same-origin checks.

Local IP addresses (such as 127.0.0.1 and ::1) are allowed as local, along with hostnames configured in local_hostnames.

allow_root#

Whether to allow the user to run the server as root.

allow_unauthenticated_access#

Allow unauthenticated access to endpoints without authentication rule.

When set to True (default in jupyter-server 2.0, subject to change in the future), any request to an endpoint without an authentication rule (either @tornado.web.authenticated, or @allow_unauthenticated) will be permitted, regardless of whether user has logged in or not.

When set to False, logging in will be required for access to each endpoint, excluding the endpoints marked with @allow_unauthenticated decorator.

This option can be configured using JUPYTER_SERVER_ALLOW_UNAUTHENTICATED_ACCESS environment variable: any non-empty value other than “true” and “yes” will prevent unauthenticated access to endpoints without @allow_unauthenticated.

authenticate_prometheus#

Require authentication to access prometheus metrics.

authorizer_class#

The authorizer class to use.

autoreload#

Reload the webapp when changes are made to any Python src files.

base_url#

The base URL for the Jupyter server.

Leading and trailing slashes can be omitted, and will automatically be added.

browser#

Specify what command to use to invoke a web browser when starting the server. If not specified, the default browser will be determined by the webbrowser standard library module, which allows setting of the BROWSER environment variable to override it.

browser_open_file#

A trait for unicode strings.

browser_open_file_to_run#

A trait for unicode strings.

certfile#

The full path to an SSL/TLS certificate file.

classes: ClassesType = [<class 'jupyter_client.manager.KernelManager'>, <class 'jupyter_client.session.Session'>, <class 'jupyter_server.services.kernels.kernelmanager.MappingKernelManager'>, <class 'jupyter_client.kernelspec.KernelSpecManager'>, <class 'jupyter_server.services.kernels.kernelmanager.AsyncMappingKernelManager'>, <class 'jupyter_server.services.contents.manager.ContentsManager'>, <class 'jupyter_server.services.contents.filemanager.FileContentsManager'>, <class 'jupyter_server.services.contents.manager.AsyncContentsManager'>, <class 'jupyter_server.services.contents.filemanager.AsyncFileContentsManager'>, <class 'nbformat.sign.NotebookNotary'>, <class 'jupyter_server.gateway.managers.GatewayMappingKernelManager'>, <class 'jupyter_server.gateway.managers.GatewayKernelSpecManager'>, <class 'jupyter_server.gateway.managers.GatewaySessionManager'>, <class 'jupyter_server.gateway.connections.GatewayWebSocketConnection'>, <class 'jupyter_server.gateway.gateway_client.GatewayClient'>, <class 'jupyter_server.auth.authorizer.Authorizer'>, <class 'jupyter_events.logger.EventLogger'>, <class 'jupyter_server.services.kernels.connection.channels.ZMQChannelsWebsocketConnection'>]#
async cleanup_extensions()#

Call shutdown hooks in all extensions.

Return type:

None

async cleanup_kernels()#

Shutdown all kernels.

The kernels will shutdown themselves when this process no longer exists, but explicit shutdown allows the KernelManagers to cleanup the connection files.

Return type:

None

client_ca#

The full path to a certificate authority certificate for SSL/TLS client authentication.

config_manager_class#

The config manager class to use

property connection_url: str#
contents_manager_class#

The content manager class to use.

cookie_options#

DEPRECATED. Use IdentityProvider.cookie_options

cookie_secret#

The random bytes used to secure cookies. By default this is a new random number every time you start the server. Set it to a value in a config file to enable logins to persist across server sessions.

Note: Cookie secrets should be kept private, do not share config files with cookie_secret stored in plaintext (you can read the value from a file).

cookie_secret_file#

The file where the cookie secret is stored.

custom_display_url#

Override URL shown to users.

Replace actual URL, including protocol, address, port and base URL, with the given value when displaying URL to the users. Do not change the actual connection URL. If authentication token is enabled, the token is added to the custom URL automatically.

This option is intended to be used when the URL to display to the user cannot be determined reliably by the Jupyter server (proxified or containerized setups for example).

default_services = ('api', 'auth', 'config', 'contents', 'files', 'kernels', 'kernelspecs', 'nbconvert', 'security', 'sessions', 'shutdown', 'view', 'events')#
default_url#

The default URL to redirect to from /

description: str = 'The Jupyter Server.\n\n    This launches a Tornado-based Jupyter Server.'#
disable_check_xsrf#

Disable cross-site-request-forgery protection

Jupyter server includes protection from cross-site request forgeries, requiring API requests to either:

  • originate from pages served by this server (validated with XSRF cookie and token), or

  • authenticate with a token

Some anonymous compute resources still desire the ability to run code, completely without authentication. These services can disable all authentication and security checks, with the full knowledge of what that implies.

property display_url: str#

Human readable string with URLs for interacting with the running Jupyter Server

event_logger#

An EventLogger for emitting structured event data from Jupyter Server and extensions.

examples: str | Unicode[str, str | bytes] = '\njupyter server                       # start the server\njupyter server  --certfile=mycert.pem # use SSL/TLS certificate\njupyter server password              # enter a password to protect the server\n'#
external_connection_dir#

The directory to look at for external kernel connection files, if allow_external_kernels is True. Defaults to Jupyter runtime_dir/external_kernels. Make sure that this directory is not filled with left-over connection files, as that could result in unnecessary kernel manager creations.

extra_services#

handlers that should be loaded at higher priority than the default services

extra_static_paths#

Extra paths to search for serving static files.

This allows adding javascript/css to be available from the Jupyter server machine, or overriding individual files in the IPython

extra_template_paths#

Extra paths to search for serving jinja templates.

Can be used to override templates from jupyter_server.templates.

file_to_run#

Open the named file when the application is launched.

file_url_prefix#

The URL prefix where files are opened directly.

find_server_extensions()#

Searches Jupyter paths for jpserver_extensions.

Return type:

None

flags: StrDict#

An instance of a Python dict.

One or more traits can be passed to the constructor to validate the keys and/or values of the dict. If you need more detailed validation, you may use a custom validator method.

Changed in version 5.0: Added key_trait for validating dict keys.

Changed in version 5.0: Deprecated ambiguous trait, traits args in favor of value_trait, per_key_traits.

get_secure_cookie_kwargs#

DEPRECATED. Use IdentityProvider.get_secure_cookie_kwargs

property http_server: HTTPServer#

An instance of Tornado’s HTTPServer class for the Server Web Application.

identity_provider_class#

The identity provider class to use.

info_file#

A trait for unicode strings.

init_components()#

Check the components submodule, and warn if it’s unclean

Return type:

None

init_configurables()#

Initialize configurables.

Return type:

None

init_event_logger()#

Initialize the Event Bus.

Return type:

None

init_httpserver()#

Creates an instance of a Tornado HTTPServer for the Server Web Application and sets the http_server attribute.

Return type:

None

init_ioloop()#

init self.io_loop so that an extension can use it by io_loop.call_later() to create background tasks

Return type:

None

init_logging()#

Initialize logging.

Return type:

None

init_mime_overrides()#
Return type:

None

init_resources()#

initialize system resources

Return type:

None

init_server_extensions()#

If an extension’s metadata includes an ‘app’ key, the value must be a subclass of ExtensionApp. An instance of the class will be created at this step. The config for this instance will inherit the ServerApp’s config object and load its own config.

Return type:

None

init_shutdown_no_activity()#

Initialize a shutdown on no activity.

Return type:

None

init_signal()#

Initialize signal handlers.

Return type:

None

init_webapp()#

initialize tornado webapp

Return type:

None

initialize(argv=None, find_extensions=True, new_httpserver=True, starter_extension=None)#

Initialize the Server application class, configurables, web application, and http server.

Parameters:
  • argv (list or None) – CLI arguments to parse.

  • find_extensions (bool) – If True, find and load extensions listed in Jupyter config paths. If False, only load extensions that are passed to ServerApp directly through the argv, config, or jpserver_extensions arguments.

  • new_httpserver (bool) – If True, a tornado HTTPServer instance will be created and configured for the Server Web Application. This will set the http_server attribute of this class.

  • starter_extension (str) – If given, it references the name of an extension point that started the Server. We will try to load configuration from extension point

Return type:

None

iopub_data_rate_limit#

DEPRECATED. Use ZMQChannelsWebsocketConnection.iopub_data_rate_limit

iopub_msg_rate_limit#

DEPRECATED. Use ZMQChannelsWebsocketConnection.iopub_msg_rate_limit

ip#

The IP address the Jupyter server will listen on.

jinja_environment_options#

Supply extra arguments that will be passed to Jinja environment.

jinja_template_vars#

Extra variables to supply to jinja templates when rendering.

jpserver_extensions#

Dict of Python modules to load as Jupyter server extensions. Entry values can be used to enable and disable the loading of the extensions. The extensions will be loaded in alphabetical order.
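For example (the module names below are illustrative):

c.ServerApp.jpserver_extensions = {
    "jupyterlab": True,   # load this extension module
    "nbclassic": False,   # keep this one disabled
}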

kernel_manager_class#

The kernel manager class to use.

kernel_spec_manager#

A trait whose value must be an instance of a specified class.

The value can also be an instance of a subclass of the specified class.

Subclasses can declare default classes by overriding the klass attribute

kernel_spec_manager_class#

The kernel spec manager class to use. Should be a subclass of jupyter_client.kernelspec.KernelSpecManager.

The Api of KernelSpecManager is provisional and might change without warning between this version of Jupyter and the next stable one.

kernel_websocket_connection_class#

The kernel websocket connection class to use.

kernel_ws_protocol#

DEPRECATED. Use ZMQChannelsWebsocketConnection.kernel_ws_protocol

keyfile#

The full path to a private key file for usage with SSL/TLS.

launch_browser()#

Launch the browser.

Return type:

None

limit_rate#

DEPRECATED. Use ZMQChannelsWebsocketConnection.limit_rate

load_server_extensions()#

Load any extensions specified by config.

Import the module, then call the load_jupyter_server_extension function, if one exists.

The extension API is experimental, and may change in future releases.

Return type:

None

local_hostnames#

Hostnames to allow as local when allow_remote_access is False.

Local IP addresses (such as 127.0.0.1 and ::1) are automatically accepted as local as well.

property local_url: str#
login_handler_class#

The login handler class to use.

logout_handler_class#

The logout handler class to use.

max_body_size#

Sets the maximum allowed size of the client request body, specified in the Content-Length request header field. If the size in a request exceeds the configured value, a malformed HTTP message is returned to the client.

Note: max_body_size is applied even in streaming mode.

max_buffer_size#

Gets or sets the maximum amount of memory, in bytes, that is allocated for use by the buffer manager.

min_open_files_limit#

Gets or sets a lower bound on the open file handles process resource limit. This may need to be increased if you run into an OSError: [Errno 24] Too many open files. This is not applicable when running on Windows.

name: str | Unicode[str, str | bytes] = 'jupyter-server'#
no_browser_open_file#

If True, do not write the redirect HTML file to disk, or show it in messages.

notebook_dir#

DEPRECATED, use root_dir.

open_browser#

Whether to open in a browser after starting. The specific browser used is platform dependent and determined by the python standard library webbrowser module, unless it is overridden using the --browser (ServerApp.browser) configuration option.

parse_command_line(argv=None)#

Parse the command line options.

Return type:

None

password#

DEPRECATED in 2.0. Use PasswordIdentityProvider.hashed_password

password_required#

DEPRECATED in 2.0. Use PasswordIdentityProvider.password_required

port#

The port the server will listen on (env: JUPYTER_PORT).

port_default_value = 8888#
port_env = 'JUPYTER_PORT'#
port_retries#

The number of additional ports to try if the specified port is not available (env: JUPYTER_PORT_RETRIES).

port_retries_default_value = 50#
port_retries_env = 'JUPYTER_PORT_RETRIES'#
preferred_dir#

Preferred starting directory to use for notebooks and kernels. ServerApp.preferred_dir is deprecated in jupyter-server 2.0. Use FileContentsManager.preferred_dir instead

property public_url: str#
pylab#

DISABLED: use %pylab or %matplotlib in the notebook to enable matplotlib.

quit_button#

If True, display controls to shut down the Jupyter server, such as menu items or buttons.

rate_limit_window#

DEPRECATED. Use ZMQChannelsWebsocketConnection.rate_limit_window

remove_browser_open_file()#

Remove the jpserver-<pid>-open.html file created for this server.

Ignores the error raised when the file has already been removed.

Return type:

None

remove_browser_open_files()#

Remove the browser_open_file and browser_open_file_to_run files created for this server.

Ignores the error raised when the file has already been removed.

Return type:

None

remove_server_info_file()#

Remove the jpserver-<pid>.json file created for this server.

Ignores the error raised when the file has already been removed.

Return type:

None

reraise_server_extension_failures#

Reraise exceptions encountered loading server extensions?

root_dir#

The directory to use for notebooks and kernels.

running_server_info(kernel_count=True)#

Return the current working directory and the server url information

Return type:

str

server_info()#

Return a JSONable dict of information about this server.

Return type:

dict[str, Any]

session_manager_class#

The session manager class to use.

shutdown_no_activity()#

Shutdown server on timeout when there are no kernels or terminals.

Return type:

None

shutdown_no_activity_timeout#

Shut down the server after N seconds with no kernels running and no activity. This can be used together with culling idle kernels (MappingKernelManager.cull_idle_timeout) to shutdown the Jupyter server when it’s not in use. This is not precisely timed: it may shut down up to a minute later. 0 (the default) disables this automatic shutdown.
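For example, a jupyter_server_config.py file could combine this option with kernel culling like this (the timeouts are illustrative):

c.ServerApp.shutdown_no_activity_timeout = 3600   # stop the server after roughly an hour of no activity
c.MappingKernelManager.cull_idle_timeout = 1800   # cull idle kernels first, after 30 minutes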

sock#

The UNIX socket the Jupyter server will listen on.

sock_mode#

The permissions mode for UNIX socket creation (default: 0600).

ssl_options#

Supply SSL options for the tornado HTTPServer. See the tornado docs for details.

start()#

Start the Jupyter server app, after initialization

This method takes no arguments so all configuration and initialization must be done prior to calling this method.

Return type:

None

start_app()#

Start the Jupyter Server application.

Return type:

None

start_ioloop()#

Start the IO Loop.

Return type:

None

property starter_app: Any#

Get the Extension that started this server.

static_custom_path#

Path to search for custom.js, css

property static_file_path: list[str]#

return extra paths + the default location

static_immutable_cache#

Paths to set up static files as immutable.

This allows setting the cache control of static files as immutable. It should be used for static files named with a hash, for instance.

stop(from_signal=False)#

Cleanup resources and stop the server.

Return type:

None

subcommands: dict[str, t.Any] = {'extension': (<class 'jupyter_server.extension.serverextension.ServerExtensionApp'>, 'Work with Jupyter server extensions'), 'list': (<class 'jupyter_server.serverapp.JupyterServerListApp'>, 'List currently running Jupyter servers.'), 'password': (<class 'jupyter_server.serverapp.JupyterPasswordApp'>, 'Set a password for the Jupyter server.'), 'stop': (<class 'jupyter_server.serverapp.JupyterServerStopApp'>, 'Stop currently running Jupyter server for a given port')}#
property template_file_path: list[str]#

return extra paths + the default locations

terminado_settings#

Supply overrides for terminado. Currently only supports “shell_command”.

terminals_enabled#

Set to False to disable terminals.

This does not make the server more secure by itself. Anything the user can do in a terminal, they can also do in a notebook.

Terminals may also be automatically disabled if the terminado package is not available.

token#

DEPRECATED. Use IdentityProvider.token

tornado_settings#

Supply overrides for the tornado.web.Application that the Jupyter server uses.

trust_xheaders#

Whether or not to trust the X-Scheme/X-Forwarded-Proto and X-Real-Ip/X-Forwarded-For headers sent by the upstream reverse proxy. Necessary if the proxy handles SSL.

use_redirect_file#

Disable launching the browser via a redirect file. For versions of notebook > 5.7.2, a security measure was added that prevented the authentication token used to launch the browser from being visible. This feature makes it difficult for other users on a multi-user system to run code in your Jupyter session as you. However, in some environments (like Windows Subsystem for Linux (WSL) and Chromebooks), launching a browser using a redirect file can lead to the browser failing to load. This is because of the difference in file structures/paths between the runtime and the browser.

Setting this to False will disable this behavior, allowing the browser to launch by using a URL and visible token (as before).

version: str = '2.14.0'#
webbrowser_open_new#

Specify where to open the server on startup. This is the new argument passed to the standard library method webbrowser.open. The behaviour is not guaranteed, but depends on browser support. Valid values are:

  • 2 opens a new tab,

  • 1 opens a new window,

  • 0 opens in an existing window.

See the webbrowser.open documentation for details.

websocket_compression_options#

Set the tornado compression options for websocket connections.

This value will be returned from WebSocketHandler.get_compression_options(). None (default) will disable compression. A dict (even an empty one) will enable compression.

See the tornado docs for WebSocketHandler.get_compression_options for details.

websocket_ping_interval#

Configure the websocket ping interval in seconds.

Websockets are long-lived connections that are used by some Jupyter Server extensions.

Periodic pings help to detect disconnected clients and keep the connection active. If this is set to None, then no pings will be performed.

When a ping is sent, the client has websocket_ping_timeout seconds to respond. If no response is received within this period, the connection will be closed from the server side.

websocket_ping_timeout#

Configure the websocket ping timeout in seconds.

See websocket_ping_interval for details.
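For example (illustrative values):

c.ServerApp.websocket_ping_interval = 30   # send a ping every 30 seconds
c.ServerApp.websocket_ping_timeout = 10    # close the connection if no pong arrives within 10 seconds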

websocket_url#

The base URL for websockets, if it differs from the HTTP server (hint: it almost certainly doesn’t).

Should be in the form of an HTTP origin: ws[s]://hostname[:port]

write_browser_open_file()#

Write a jpserver-<pid>-open.html file

This can be used to open the notebook in a browser

Return type:

None

write_browser_open_files()#

Write the browser_open_file and browser_open_file_to_run files

This can be used to open a file directly in a browser.

Return type:

None

write_server_info_file()#

Write the result of server_info() to the JSON file info_file.

Return type:

None

class jupyter_server.serverapp.ServerWebApplication(jupyter_app, default_services, kernel_manager, contents_manager, session_manager, kernel_spec_manager, config_manager, event_logger, extra_services, log, base_url, default_url, settings_overrides, jinja_env_options, *, authorizer=None, identity_provider=None, kernel_websocket_connection_class=None, websocket_ping_interval=None, websocket_ping_timeout=None)#

Bases: Application

A server web application.

add_handlers(host_pattern, host_handlers)#

Appends the given handlers to our handler list.

Host patterns are processed sequentially in the order they were added. All matching patterns will be considered.

init_handlers(default_services, settings)#

Load the (URL pattern, handler) tuples for each component.

init_settings(jupyter_app, kernel_manager, contents_manager, session_manager, kernel_spec_manager, config_manager, event_logger, extra_services, log, base_url, default_url, settings_overrides, jinja_env_options=None, *, authorizer=None, identity_provider=None, kernel_websocket_connection_class=None, websocket_ping_interval=None, websocket_ping_timeout=None)#

Initialize settings for the web application.

last_activity()#

Get a UTC timestamp for when the server last did something.

Includes: API activity, kernel activity, kernel shutdown, and terminal activity.

jupyter_server.serverapp.list_running_servers(runtime_dir=None, log=None)#

Iterate over the server info files of running Jupyter servers.

Given a runtime directory, find jpserver-* files in the security directory, and yield dicts of their information, each one pertaining to a currently running Jupyter server instance.

Return type:

Generator[Any, None, None]

jupyter_server.serverapp.load_handlers(name)#

Load the (URL pattern, handler) tuples for each component.

Return type:

Any

jupyter_server.serverapp.random_ports(port, n)#

Generate a list of n random ports near the given port.

The first 5 ports will be sequential, and the remaining n-5 will be randomly selected in the range [port-2*n, port+2*n].

Return type:

Generator[int, None, None]

jupyter_server.serverapp.shutdown_server(server_info, timeout=5, log=None)#

Shutdown a Jupyter server in a separate process.

server_info should be a dictionary as produced by list_running_servers().

Will first try to request shutdown using /api/shutdown . On Unix, if the server is still running after timeout seconds, it will send SIGTERM. After another timeout, it escalates to SIGKILL.

Returns True if the server was stopped by any means, False if stopping it failed (on Windows).
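A minimal sketch of how these helpers might be used together (assuming the info dicts are those written by write_server_info_file):

from jupyter_server.serverapp import list_running_servers, shutdown_server

for info in list_running_servers():
    print(info)                # the dict read from the jpserver-<pid>.json file
    # shutdown_server(info)    # uncomment to request a clean shutdown of that server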

Custom trait types.

class jupyter_server.traittypes.InstanceFromClasses(klasses=None, args=None, kw=None, **kwargs)#

Bases: ClassBasedTraitType

A trait whose value must be an instance of a class in a specified list of classes. The value can also be an instance of a subclass of the specified classes. Subclasses can declare default classes by overriding the klass attribute

default_value_repr()#

Get the default value repr.

from_string(s)#

Convert from a string.

info()#

Get the trait info.

instance_from_importable_klasses(value)#

Check that a given class is a subclasses found in the klasses list.

instance_init(obj)#

Initialize the trait.

make_dynamic_default()#

Make the dynamic default for the trait.

validate(obj, value)#

Validate an instance.

class jupyter_server.traittypes.TypeFromClasses(default_value=traitlets.Undefined, klasses=None, **kwargs)#

Bases: ClassBasedTraitType

A trait whose value must be a subclass of a class in a specified list of classes.

default_value_repr()#

The default value repr.

info()#

Returns a description of the trait.

instance_init(obj)#

Initialize an instance.

subclass_from_klasses(value)#

Check that a given class is a subclasses found in the klasses list.

validate(obj, value)#

Validates that the value is a valid object instance.

Translation related utilities. When imported, injects _ to builtins

Notebook related utilities

exception jupyter_server.utils.JupyterServerAuthWarning#

Bases: RuntimeWarning

Emitted when authentication configuration issue is detected.

Intended for filtering out expected warnings in tests, including downstream tests, rather than for users to silence this warning.

async jupyter_server.utils.async_fetch(urlstring, method='GET', body=None, headers=None, io_loop=None)#

Send an asynchronous HTTP, HTTPS, or HTTP+UNIX request to a Tornado Web Server. Returns a tornado HTTPResponse.

Return type:

HTTPResponse

jupyter_server.utils.check_pid(pid)#

Copy of IPython.utils.process.check_pid

Return type:

bool

jupyter_server.utils.check_version(v, check)#

check version string v >= check

If dev/prerelease tags result in TypeError for string-number comparison, it is assumed that the dependency is satisfied. Users on dev branches are responsible for keeping their own packages up to date.

Return type:

bool

jupyter_server.utils.expand_path(s)#

Expand $VARS and ~names in a string, like a shell

Examples:

In [2]: os.environ['FOO'] = 'test'
In [3]: expand_path('variable FOO is $FOO')
Out[3]: 'variable FOO is test'

Return type:

str

jupyter_server.utils.fetch(urlstring, method='GET', body=None, headers=None)#

Send a HTTP, HTTPS, or HTTP+UNIX request to a Tornado Web Server. Returns a tornado HTTPResponse.

Return type:

HTTPResponse

jupyter_server.utils.filefind(filename, path_dirs=None)#

Find a file by looking through a sequence of paths. This iterates through a sequence of paths looking for a file and returns the full, absolute path of the first occurrence of the file. If no set of path dirs is given, the filename is tested as is, after running through expandvars() and expanduser(). Thus a simple call:

filefind("myfile.txt")

will find the file in the current working dir, but:

filefind("~/myfile.txt")

Will find the file in the user’s home directory. This function does not automatically try any paths, such as the cwd or the user’s home directory.

Parameters:
  • filename (str) – The filename to look for.

  • path_dirs (str, None or sequence of str) – The sequence of paths to look for the file in. If None, the filename needs to be absolute or be in the cwd. If a string, the string is put into a sequence and then searched. If a sequence, walk through each element and join with filename, calling expandvars() and expanduser() before testing for existence.

Return type:

Raises IOError or returns absolute path to file.

jupyter_server.utils.import_item(name)#

Import and return bar given the string foo.bar. Calling bar = import_item("foo.bar") is the functional equivalent of executing the code from foo import bar.

Parameters:

name (str) – The fully qualified name of the module/package being imported.

Returns:

mod – The module that was imported.

Return type:

module object

jupyter_server.utils.is_namespace_package(namespace)#

Is the provided namespace a Python Namespace Package (PEP420).

https://www.python.org/dev/peps/pep-0420/#specification

Returns None if module is not importable.

Return type:

bool | None

jupyter_server.utils.path2url(path)#

Convert a local file path to a URL

Return type:

str

async jupyter_server.utils.run_sync_in_loop(maybe_async)#

DEPRECATED: Use ensure_async from jupyter_core instead.

jupyter_server.utils.samefile_simple(path, other_path)#

Fill in for os.path.samefile when it is unavailable (Windows+py2).

Do a case-insensitive string comparison in this case plus comparing the full stat result (including times) because Windows + py2 doesn’t support the stat fields needed for identifying if it’s the same file (st_ino, st_dev).

Only to be used if os.path.samefile is not available.

Parameters:
  • path (str) – representing a path to a file

  • other_path (str) – representing a path to another file

Returns:

same

Return type:

Boolean that is True if both path and other path are the same

jupyter_server.utils.to_api_path(os_path, root='')#

Convert a filesystem path to an API path

If given, root will be removed from the path. root must be a filesystem path already.

Return type:

NewType()(ApiPath, str)

jupyter_server.utils.to_os_path(path, root='')#

Convert an API path to a filesystem path

If given, root will be prepended to the path. root must be a filesystem path already.

Return type:

str

jupyter_server.utils.unix_socket_in_use(socket_path)#

Checks whether a UNIX socket path on disk is in use by attempting to connect to it.

Return type:

bool

jupyter_server.utils.url2path(url)#

Convert a URL to a local file path

Return type:

str

jupyter_server.utils.url_escape(path)#

Escape special characters in a URL path

Turns ‘/foo bar/’ into ‘/foo%20bar/’

Return type:

str

jupyter_server.utils.url_is_absolute(url)#

Determine whether a given URL is absolute

Return type:

bool

jupyter_server.utils.url_path_join(*pieces)#

Join components of url into a relative url

Use to prevent double slash when joining subpath. This will leave the initial and final / in place

Return type:

str
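A short illustration of the behaviour described above:

from jupyter_server.utils import url_path_join

url_path_join("/base/", "/api/", "contents")   # -> '/base/api/contents'
url_path_join("base", "api/")                  # -> 'base/api/'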

jupyter_server.utils.url_unescape(path)#

Unescape special characters in a URL path

Turns ‘/foo%20bar/’ into ‘/foo bar/’

Return type:

str

jupyter_server.utils.urldecode_unix_socket_path(socket_path)#

Decodes a UNIX sock path string from an encoded sock path for the http+unix URI form.

Return type:

str

jupyter_server.utils.urlencode_unix_socket(socket_path)#

Encodes a UNIX socket URL from a socket path for the http+unix URI form.

Return type:

str

jupyter_server.utils.urlencode_unix_socket_path(socket_path)#

Encodes a UNIX socket path string from a socket path for the http+unix URI form.

Return type:

str

Module contents#

The Jupyter Server

class jupyter_server.CallContext#

Bases: object

CallContext essentially acts as a namespace for managing context variables.

Although not required, it is recommended that any “file-spanning” context variable names (i.e., variables that will be set or retrieved from multiple files or services) be added as constants to this class definition.

JUPYTER_HANDLER: str = 'JUPYTER_HANDLER'#

Provides access to the current request handler once set.

classmethod context_variable_names()#

Returns a list of variable names set for this call context.

Returns:

names – A list of variable names set for this call context.

Return type:

List[str]

classmethod get(name)#

Returns the value corresponding the named variable relative to this context.

If the named variable doesn’t exist, None will be returned.

Parameters:

name (str) – The name of the variable to get from the call context

Returns:

value – The value associated with the named variable for this call context

Return type:

Any

classmethod set(name, value)#

Sets the named variable to the specified value in the current call context.

Parameters:
  • name (str) – The name of the variable to store into the call context

  • value (Any) – The value of the variable to store into the call context

Return type:

None
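A minimal sketch of how CallContext might be used (the variable name below is illustrative):

from jupyter_server import CallContext

CallContext.set("my_request_id", "abc123")   # "my_request_id" is an illustrative name
CallContext.get("my_request_id")             # -> 'abc123'
CallContext.context_variable_names()         # includes 'my_request_id'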

Documentation for Contributors#

These pages target people who are interested in contributing directly to the Jupyter Server Project.

Team Meetings, Road Map and Calendar#

Many of the lead Jupyter Server developers meet weekly over Zoom. These meetings are open to everyone.

To see when the next meeting is happening and how to attend, watch this Github issue:

jupyter-server/team-compass#15

Meeting Notes#
Roadmap#

Also check out Jupyter Server’s roadmap where we track future plans for Jupyter Server:

Jupyter Server 2.0 Discussion

Archived roadmap

Jupyter Calendar#

General Jupyter contributor guidelines#

If you’re reading this section, you’re probably interested in contributing to Jupyter. Welcome and thanks for your interest in contributing!

Please take a look at the Contributor documentation, familiarize yourself with using the Jupyter Server, and introduce yourself on the mailing list and share what area of the project you are interested in working on.

For general documentation about contributing to Jupyter projects, see the Project Jupyter Contributor Documentation.

Setting Up a Development Environment#

Installing the Jupyter Server#

The development version of the server requires node and pip.

Once you have installed the dependencies mentioned above, use the following steps:

pip install --upgrade pip
git clone https://github.com/jupyter/jupyter_server
cd jupyter_server
pip install -e ".[test]"

If you are using a system-wide Python installation and you only want to install the server for you, you can add --user to the install commands.

Once you have done this, you can launch the main branch of Jupyter server from any directory in your system with:

jupyter server
Code Styling and Quality Checks#

jupyter_server has adopted automatic code formatting so you shouldn’t need to worry too much about your code style. As long as your code is valid, the pre-commit hook should take care of how it should look. pre-commit and its associated hooks will automatically be installed when you run pip install -e ".[test]"

To install pre-commit hook manually, run the following:

pre-commit install

You can invoke the pre-commit hook by hand at any time with:

pre-commit run

which should run any autoformatting on your code and tell you about any errors it couldn’t fix automatically. You may also install black integration into your text editor to format code automatically.

If you have already committed files before setting up the pre-commit hook with pre-commit install, you can fix everything up using pre-commit run --all-files. You need to make the fixing commit yourself after that.

Some of the hooks only run on CI by default, but you can invoke them by running with the --hook-stage manual argument.

There are hatch scripts that can be run locally as well: hatch run lint:build will enforce styling, and hatch run typing:test will run the type checker.

Troubleshooting the Installation#

If your Jupyter Server does not appear to be running in dev mode, it’s possible that you are running other instances of Jupyter Server. You can try the following steps:

  1. Uninstall all instances of the jupyter_server package. These include any installations you made using pip or conda

  2. Run python -m pip install -e . in the jupyter_server repository to install the jupyter_server from there

  3. Run npm run build to make sure the Javascript and CSS are updated and compiled

  4. Launch with python -m jupyter_server --port 8989, and check that the browser is pointing to localhost:8989 (rather than the default 8888). You don’t necessarily have to launch with port 8989, as long as you use a port that is neither the default nor in use, then it should be fine.

  5. Verify the installation with the steps in the previous section.

Running Tests#

Install dependencies:

pip install -e .[test]
pip install -e examples/simple  # to test the examples

To run the Python tests, use:

pytest
pytest examples/simple  # to test the examples

You can also run the tests using hatch without installing test dependencies in your local environment:

pip install hatch
hatch run test:test

The command takes any argument that you can give to pytest, e.g.:

hatch run test:test -k name_of_method_to_test

You can also drop into a shell in the test environment by running:

hatch -e test shell

Building the Docs#

Install the docs requirements using pip:

pip install .[doc]

Once you have installed the required packages, you can build the docs with:

cd docs
make html

You can also build the docs using hatch without installing the docs dependencies in your local environment:

pip install hatch
hatch run docs:build

You can also drop into a shell in the docs environment by running:

hatch -e docs shell

After that, the generated HTML files will be available at build/html/index.html. You may view the docs in your browser.

Windows users can find make.bat in the docs folder.

You should also have a look at the Project Jupyter Documentation Guide.

Other helpful documentation#

Frequently asked questions#

Here is a list of questions we think you might have. This list will always be growing, so please feel free to add your question + answer to this page! 🚀

Can I configure multiple extensions at once?#

Check out our “Operator” docs on how to configure extensions. 📕

Config file and command line options#

The Jupyter Server can be run with a variety of command line arguments. A list of available options can be found below in the options section.

Defaults for these options can also be set by creating a file named jupyter_server_config.py in your Jupyter folder. The Jupyter folder is in your home directory, ~/.jupyter.

To create a jupyter_server_config.py file, with all the defaults commented out, you can use the following command line:

$ jupyter server --generate-config
Options#

This list of options can be generated by running the following and hitting enter:

$ jupyter server --help-all
Application.log_datefmtUnicode

Default: '%Y-%m-%d %H:%M:%S'

The date format used by logging formatters for %(asctime)s

Application.log_formatUnicode

Default: '[%(name)s]%(highlevel)s %(message)s'

The Logging format template

Application.log_level : any of 0 | 10 | 20 | 30 | 40 | 50 | 'DEBUG' | 'INFO' | 'WARN' | 'ERROR' | 'CRITICAL'

Default: 30

Set the log level by value or name.

Application.logging_configDict

Default: {}

Configure additional log handlers.

The default stderr logs handler is configured by the log_level, log_datefmt and log_format settings.

This configuration can be used to configure additional handlers (e.g. to output the log to a file) or for finer control over the default handlers.

If provided this should be a logging configuration dictionary, for more information see: https://docs.python.org/3/library/logging.config.html#logging-config-dictschema

This dictionary is merged with the base logging configuration which defines the following:

  • A logging formatter intended for interactive use called console.

  • A logging handler that writes to stderr called console which uses the formatter console.

  • A logger with the name of this application set to DEBUG level.

This example adds a new handler that writes to a file:

c.Application.logging_config = {
    "handlers": {
        "file": {
            "class": "logging.FileHandler",
            "level": "DEBUG",
            "filename": "<path/to/file>",
        }
    },
    "loggers": {
        "<application-name>": {
            "level": "DEBUG",
            # NOTE: if you don't list the default "console"
            # handler here then it will be disabled
            "handlers": ["console", "file"],
        },
    },
}
Application.show_configBool

Default: False

Instead of starting the Application, dump configuration to stdout

Application.show_config_jsonBool

Default: False

Instead of starting the Application, dump configuration to stdout (as JSON)

JupyterApp.answer_yesBool

Default: False

Answer yes to any prompts.

JupyterApp.config_fileUnicode

Default: ''

Full path of a config file.

JupyterApp.config_file_nameUnicode

Default: ''

Specify a config file to load.

JupyterApp.generate_configBool

Default: False

Generate default config file.

JupyterApp.log_datefmtUnicode

Default: '%Y-%m-%d %H:%M:%S'

The date format used by logging formatters for %(asctime)s

JupyterApp.log_formatUnicode

Default: '[%(name)s]%(highlevel)s %(message)s'

The Logging format template

JupyterApp.log_level : any of 0 | 10 | 20 | 30 | 40 | 50 | 'DEBUG' | 'INFO' | 'WARN' | 'ERROR' | 'CRITICAL'

Default: 30

Set the log level by value or name.

JupyterApp.logging_configDict

Default: {}

Configure additional log handlers.

The default stderr logs handler is configured by the log_level, log_datefmt and log_format settings.

This configuration can be used to configure additional handlers (e.g. to output the log to a file) or for finer control over the default handlers.

If provided this should be a logging configuration dictionary, for more information see: https://docs.python.org/3/library/logging.config.html#logging-config-dictschema

This dictionary is merged with the base logging configuration which defines the following:

  • A logging formatter intended for interactive use called console.

  • A logging handler that writes to stderr called console which uses the formatter console.

  • A logger with the name of this application set to DEBUG level.

This example adds a new handler that writes to a file:

c.Application.logging_config = {
    "handlers": {
        "file": {
            "class": "logging.FileHandler",
            "level": "DEBUG",
            "filename": "<path/to/file>",
        }
    },
    "loggers": {
        "<application-name>": {
            "level": "DEBUG",
            # NOTE: if you don't list the default "console"
            # handler here then it will be disabled
            "handlers": ["console", "file"],
        },
    },
}
JupyterApp.show_configBool

Default: False

Instead of starting the Application, dump configuration to stdout

JupyterApp.show_config_jsonBool

Default: False

Instead of starting the Application, dump configuration to stdout (as JSON)

ServerApp.allow_credentialsBool

Default: False

Set the Access-Control-Allow-Credentials: true header

ServerApp.allow_external_kernelsBool

Default: False

Whether or not to allow external kernels, whose connection files are placed in external_connection_dir.

ServerApp.allow_originUnicode

Default: ''

Set the Access-Control-Allow-Origin header

Use ‘*’ to allow any origin to access your server.

Takes precedence over allow_origin_pat.

ServerApp.allow_origin_patUnicode

Default: ''

Use a regular expression for the Access-Control-Allow-Origin header

Requests from an origin matching the expression will get replies with:

Access-Control-Allow-Origin: origin

where origin is the origin of the request.

Ignored if allow_origin is set.
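
As a minimal sketch of how these traits combine in jupyter_server_config.py (the origin URLs below are placeholders, not recommendations):

c.ServerApp.allow_origin = "https://frontend.example.org"      # exact origin; takes precedence over the pattern
# c.ServerApp.allow_origin_pat = r"https://.*\.example\.org"   # or match a family of origins instead
c.ServerApp.allow_credentials = True                           # also send Access-Control-Allow-Credentials: true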

ServerApp.allow_password_changeBool

Default: True

DEPRECATED in 2.0. Use PasswordIdentityProvider.allow_password_change

ServerApp.allow_remote_accessBool

Default: False

Allow requests where the Host header doesn’t point to a local server

By default, requests get a 403 forbidden response if the ‘Host’ header shows that the browser thinks it’s on a non-local domain. Setting this option to True disables this check.

This protects against ‘DNS rebinding’ attacks, where a remote web server serves you a page and then changes its DNS to send later requests to a local IP, bypassing same-origin checks.

Local IP addresses (such as 127.0.0.1 and ::1) are allowed as local, along with hostnames configured in local_hostnames.

ServerApp.allow_rootBool

Default: False

Whether to allow the user to run the server as root.

ServerApp.allow_unauthenticated_accessBool

Default: True

Allow unauthenticated access to endpoints without authentication rule.

When set to True (default in jupyter-server 2.0, subject to change in the future), any request to an endpoint without an authentication rule (either @tornado.web.authenticated, or @allow_unauthenticated) will be permitted, regardless of whether user has logged in or not.

When set to False, logging in will be required for access to each endpoint, excluding the endpoints marked with @allow_unauthenticated decorator.

This option can be configured using JUPYTER_SERVER_ALLOW_UNAUTHENTICATED_ACCESS environment variable: any non-empty value other than “true” and “yes” will prevent unauthenticated access to endpoints without @allow_unauthenticated.
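
For example, to require login on every endpoint that lacks an explicit authentication rule, you could set the trait in jupyter_server_config.py (a minimal sketch; the JUPYTER_SERVER_ALLOW_UNAUTHENTICATED_ACCESS environment variable described above has the same effect):

c.ServerApp.allow_unauthenticated_access = False  # endpoints without @allow_unauthenticated now require login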

ServerApp.answer_yesBool

Default: False

Answer yes to any prompts.

ServerApp.authenticate_prometheusBool

Default: True

Require authentication to access prometheus metrics.

ServerApp.authorizer_classType

Default: 'jupyter_server.auth.authorizer.AllowAllAuthorizer'

The authorizer class to use.

ServerApp.autoreloadBool

Default: False

Reload the webapp when changes are made to any Python src files.

ServerApp.base_urlUnicode

Default: '/'

The base URL for the Jupyter server.

Leading and trailing slashes can be omitted, and will automatically be added.

ServerApp.browserUnicode

Default: ''

Specify what command to use to invoke a web

browser when starting the server. If not specified, the default browser will be determined by the webbrowser standard library module, which allows setting of the BROWSER environment variable to override it.

ServerApp.certfileUnicode

Default: ''

The full path to an SSL/TLS certificate file.

ServerApp.client_caUnicode

Default: ''

The full path to a certificate authority certificate for SSL/TLS client authentication.

ServerApp.config_fileUnicode

Default: ''

Full path of a config file.

ServerApp.config_file_nameUnicode

Default: ''

Specify a config file to load.

ServerApp.config_manager_classType

Default: 'jupyter_server.services.config.manager.ConfigManager'

The config manager class to use

ServerApp.contents_manager_classType

Default: 'jupyter_server.services.contents.largefilemanager.AsyncLarge...

The content manager class to use.

ServerApp.cookie_optionsDict

Default: {}

DEPRECATED. Use IdentityProvider.cookie_options

ServerApp.cookie_secretBytes

Default: b''

The random bytes used to secure cookies.

By default this is a new random number every time you start the server. Set it to a value in a config file to enable logins to persist across server sessions.

Note: Cookie secrets should be kept private, do not share config files with cookie_secret stored in plaintext (you can read the value from a file).

ServerApp.cookie_secret_fileUnicode

Default: ''

The file where the cookie secret is stored.
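
For example, to persist logins across server restarts without embedding the secret in the config file itself, you could point the server at a secret file (the path is only a placeholder):

c.ServerApp.cookie_secret_file = "/path/to/jupyter_cookie_secret"  # keep this file private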

ServerApp.custom_display_urlUnicode

Default: ''

Override URL shown to users.

Replace actual URL, including protocol, address, port and base URL, with the given value when displaying URL to the users. Do not change the actual connection URL. If authentication token is enabled, the token is added to the custom URL automatically.

This option is intended to be used when the URL to display to the user cannot be determined reliably by the Jupyter server (proxified or containerized setups for example).
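
A sketch for a proxied deployment (the URL is a placeholder):

c.ServerApp.custom_display_url = "https://jupyter.example.org/"  # shown to users instead of the real bind address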

ServerApp.default_urlUnicode

Default: '/'

The default URL to redirect to from /

ServerApp.disable_check_xsrfBool

Default: False

Disable cross-site-request-forgery protection

Jupyter server includes protection from cross-site request forgeries, requiring API requests to either:

  • originate from pages served by this server (validated with XSRF cookie and token), or

  • authenticate with a token

Some anonymous compute resources still desire the ability to run code, completely without authentication. These services can disable all authentication and security checks, with the full knowledge of what that implies.

ServerApp.external_connection_dirUnicode

Default: None

The directory to look at for external kernel connection files, if allow_external_kernels is True. Defaults to Jupyter runtime_dir/external_kernels. Make sure that this directory is not filled with left-over connection files, as that could result in unnecessary kernel manager creations.

ServerApp.extra_servicesList

Default: []

handlers that should be loaded at higher priority than the default services

ServerApp.extra_static_pathsList

Default: []

Extra paths to search for serving static files.

This allows adding javascript/css to be available from the Jupyter server machine, or overriding individual files in the IPython

ServerApp.extra_template_pathsList

Default: []

Extra paths to search for serving jinja templates.

Can be used to override templates from jupyter_server.templates.

ServerApp.file_to_runUnicode

Default: ''

Open the named file when the application is launched.

ServerApp.file_url_prefixUnicode

Default: 'notebooks'

The URL prefix where files are opened directly.

ServerApp.generate_configBool

Default: False

Generate default config file.

ServerApp.get_secure_cookie_kwargsDict

Default: {}

DEPRECATED. Use IdentityProvider.get_secure_cookie_kwargs

ServerApp.identity_provider_classType

Default: 'jupyter_server.auth.identity.PasswordIdentityProvider'

The identity provider class to use.

ServerApp.iopub_data_rate_limitFloat

Default: 0.0

DEPRECATED. Use ZMQChannelsWebsocketConnection.iopub_data_rate_limit

ServerApp.iopub_msg_rate_limitFloat

Default: 0.0

DEPRECATED. Use ZMQChannelsWebsocketConnection.iopub_msg_rate_limit

ServerApp.ipUnicode

Default: 'localhost'

The IP address the Jupyter server will listen on.

ServerApp.jinja_environment_optionsDict

Default: {}

Supply extra arguments that will be passed to Jinja environment.

ServerApp.jinja_template_varsDict

Default: {}

Extra variables to supply to jinja templates when rendering.

ServerApp.jpserver_extensionsDict

Default: {}

Dict of Python modules to load as Jupyter server extensions. Entry values can be used to enable and disable the loading of the extensions. The extensions will be loaded in alphabetical order.

ServerApp.kernel_manager_classType

Default: 'jupyter_server.services.kernels.kernelmanager.MappingKernelM...

The kernel manager class to use.

ServerApp.kernel_spec_manager_classType

Default: 'builtins.object'

The kernel spec manager class to use. Should be a subclass of jupyter_client.kernelspec.KernelSpecManager.

The API of KernelSpecManager is provisional and might change without warning between this version of Jupyter and the next stable one.

ServerApp.kernel_websocket_connection_classType

Default: 'jupyter_server.services.kernels.connection.base.BaseKernelWe...

The kernel websocket connection class to use.

ServerApp.kernel_ws_protocolUnicode

Default: ''

DEPRECATED. Use ZMQChannelsWebsocketConnection.kernel_ws_protocol

ServerApp.keyfileUnicode

Default: ''

The full path to a private key file for usage with SSL/TLS.

ServerApp.limit_rateBool

Default: False

DEPRECATED. Use ZMQChannelsWebsocketConnection.limit_rate

ServerApp.local_hostnamesList

Default: ['localhost']

Hostnames to allow as local when allow_remote_access is False.

Local IP addresses (such as 127.0.0.1 and ::1) are automatically accepted as local as well.

ServerApp.log_datefmtUnicode

Default: '%Y-%m-%d %H:%M:%S'

The date format used by logging formatters for %(asctime)s

ServerApp.log_formatUnicode

Default: '[%(name)s]%(highlevel)s %(message)s'

The Logging format template

ServerApp.log_level : any of 0 | 10 | 20 | 30 | 40 | 50 | 'DEBUG' | 'INFO' | 'WARN' | 'ERROR' | 'CRITICAL'

Default: 30

Set the log level by value or name.

ServerApp.logging_configDict

Default: {}

Configure additional log handlers.

The default stderr logs handler is configured by the log_level, log_datefmt and log_format settings.

This configuration can be used to configure additional handlers (e.g. to output the log to a file) or for finer control over the default handlers.

If provided this should be a logging configuration dictionary, for more information see: https://docs.python.org/3/library/logging.config.html#logging-config-dictschema

This dictionary is merged with the base logging configuration which defines the following:

  • A logging formatter intended for interactive use called console.

  • A logging handler that writes to stderr called console which uses the formatter console.

  • A logger with the name of this application set to DEBUG level.

This example adds a new handler that writes to a file:

c.Application.logging_config = {
    "handlers": {
        "file": {
            "class": "logging.FileHandler",
            "level": "DEBUG",
            "filename": "<path/to/file>",
        }
    },
    "loggers": {
        "<application-name>": {
            "level": "DEBUG",
            # NOTE: if you don't list the default "console"
            # handler here then it will be disabled
            "handlers": ["console", "file"],
        },
    },
}
ServerApp.login_handler_classType

Default: 'jupyter_server.auth.login.LegacyLoginHandler'

The login handler class to use.

ServerApp.logout_handler_classType

Default: 'jupyter_server.auth.logout.LogoutHandler'

The logout handler class to use.

ServerApp.max_body_sizeInt

Default: 536870912

Sets the maximum allowed size of the client request body, specified in the Content-Length request header field. If the size in a request exceeds the configured value, a malformed HTTP message is returned to the client.

Note: max_body_size is applied even in streaming mode.

ServerApp.max_buffer_sizeInt

Default: 536870912

Gets or sets the maximum amount of memory, in bytes, that is allocated for use by the buffer manager.

ServerApp.min_open_files_limitInt

Default: 0

Gets or sets a lower bound on the open file handles process resource limit. This may need to be increased if you run into an OSError: [Errno 24] Too many open files. This is not applicable when running on Windows.

ServerApp.notebook_dirUnicode

Default: ''

DEPRECATED, use root_dir.

ServerApp.open_browserBool

Default: False

Whether to open in a browser after starting.

The specific browser used is platform dependent and determined by the python standard library webbrowser module, unless it is overridden using the --browser (ServerApp.browser) configuration option.

ServerApp.passwordUnicode

Default: ''

DEPRECATED in 2.0. Use PasswordIdentityProvider.hashed_password

ServerApp.password_requiredBool

Default: False

DEPRECATED in 2.0. Use PasswordIdentityProvider.password_required

ServerApp.portInt

Default: 0

The port the server will listen on (env: JUPYTER_PORT).

ServerApp.port_retriesInt

Default: 50

The number of additional ports to try if the specified port is not available (env: JUPYTER_PORT_RETRIES).
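
A minimal sketch combining the network traits above in jupyter_server_config.py (values are illustrative):

c.ServerApp.ip = "localhost"   # the address the server listens on (the default)
c.ServerApp.port = 9999        # ask for port 9999 first
c.ServerApp.port_retries = 10  # then try up to 10 of the following ports if it is busy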

ServerApp.preferred_dirUnicode

Default: ''

Preferred starting directory to use for notebooks and kernels. ServerApp.preferred_dir is deprecated in jupyter-server 2.0. Use FileContentsManager.preferred_dir instead

ServerApp.pylabUnicode

Default: 'disabled'

DISABLED: use %pylab or %matplotlib in the notebook to enable matplotlib.

ServerApp.quit_buttonBool

Default: True

If True, display controls to shut down the Jupyter server, such as menu items or buttons.

ServerApp.rate_limit_windowFloat

Default: 0.0

DEPRECATED. Use ZMQChannelsWebsocketConnection.rate_limit_window

ServerApp.reraise_server_extension_failuresBool

Default: False

Reraise exceptions encountered loading server extensions?

ServerApp.root_dirUnicode

Default: ''

The directory to use for notebooks and kernels.

ServerApp.session_manager_classType

Default: 'builtins.object'

The session manager class to use.

ServerApp.show_configBool

Default: False

Instead of starting the Application, dump configuration to stdout

ServerApp.show_config_jsonBool

Default: False

Instead of starting the Application, dump configuration to stdout (as JSON)

ServerApp.shutdown_no_activity_timeoutInt

Default: 0

Shut down the server after N seconds with no kernels running and no activity. This can be used together with culling idle kernels (MappingKernelManager.cull_idle_timeout) to shut down the Jupyter server when it’s not in use. This is not precisely timed: it may shut down up to a minute later. 0 (the default) disables this automatic shutdown.

ServerApp.sockUnicode

Default: ''

The UNIX socket the Jupyter server will listen on.

ServerApp.sock_modeUnicode

Default: '0600'

The permissions mode for UNIX socket creation (default: 0600).

ServerApp.ssl_optionsDict

Default: {}

Supply SSL options for the tornado HTTPServer.

See the tornado docs for details.

ServerApp.static_immutable_cacheList

Default: []

Paths to set up static files as immutable.

This allows setting up the cache control of static files as immutable. It should be used for static files named with a hash, for instance.

ServerApp.terminado_settingsDict

Default: {}

Supply overrides for terminado. Currently only supports “shell_command”.

ServerApp.terminals_enabledBool

Default: False

Set to False to disable terminals.

This does not make the server more secure by itself. Anything the user can do in a terminal, they can also do in a notebook.

Terminals may also be automatically disabled if the terminado package is not available.

ServerApp.tokenUnicode

Default: '<DEPRECATED>'

DEPRECATED. Use IdentityProvider.token

ServerApp.tornado_settingsDict

Default: {}

Supply overrides for the tornado.web.Application that the Jupyter server uses.

ServerApp.trust_xheadersBool

Default: False

Whether or not to trust X-Scheme/X-Forwarded-Proto and X-Real-Ip/X-Forwarded-For headers sent by the upstream reverse proxy. Necessary if the proxy handles SSL.

ServerApp.use_redirect_fileBool

Default: True

Disable launching browser by redirect file

For versions of notebook > 5.7.2, a security feature was added that prevented the authentication token used to launch the browser from being visible. This feature makes it difficult for other users on a multi-user system to run code in your Jupyter session as you. However, in some environments (such as Windows Subsystem for Linux (WSL) and Chromebooks), launching a browser using a redirect file can lead to the browser failing to load. This is because of the difference in file structures/paths between the runtime and the browser.

Setting this to False will disable this behavior, allowing the browser to be launched by using a URL and visible token (as before).

ServerApp.webbrowser_open_newInt

Default: 2

Specify where to open the server on startup. This is the

new argument passed to the standard library method webbrowser.open. The behaviour is not guaranteed, but depends on browser support. Valid values are:

  • 2 opens a new tab,

  • 1 opens a new window,

  • 0 opens in an existing window.

See the webbrowser.open documentation for details.
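
For instance, to open a new browser window rather than a new tab on startup (a minimal sketch):

c.ServerApp.open_browser = True      # open a browser when the server starts
c.ServerApp.webbrowser_open_new = 1  # 1 = new window, 2 = new tab (default), 0 = existing window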

ServerApp.websocket_compression_optionsAny

Default: None

Set the tornado compression options for websocket connections.

This value will be returned from WebSocketHandler.get_compression_options(). None (default) will disable compression. A dict (even an empty one) will enable compression.

See the tornado docs for WebSocketHandler.get_compression_options for details.

ServerApp.websocket_ping_intervalInt

Default: 0

Configure the websocket ping interval in seconds.

Websockets are long-lived connections that are used by some Jupyter Server extensions.

Periodic pings help to detect disconnected clients and keep the connection active. If this is set to None, then no pings will be performed.

When a ping is sent, the client has websocket_ping_timeout seconds to respond. If no response is received within this period, the connection will be closed from the server side.

ServerApp.websocket_ping_timeoutInt

Default: 0

Configure the websocket ping timeout in seconds.

See websocket_ping_interval for details.
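
A sketch that enables periodic pings with a shorter timeout (values are illustrative):

c.ServerApp.websocket_ping_interval = 30  # send a ping every 30 seconds
c.ServerApp.websocket_ping_timeout = 10   # close the connection if no response within 10 seconds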

ServerApp.websocket_urlUnicode

Default: ''

The base URL for websockets,

if it differs from the HTTP server (hint: it almost certainly doesn’t).

Should be in the form of an HTTP origin: ws[s]://hostname[:port]

ConnectionFileMixin.connection_fileUnicode

Default: ''

JSON file in which to store connection info [default: kernel-<pid>.json]

This file will contain the IP, ports, and authentication key needed to connect clients to this kernel. By default, this file will be created in the security dir of the current profile, but can be specified by absolute path.

ConnectionFileMixin.control_portInt

Default: 0

set the control (ROUTER) port [default: random]

ConnectionFileMixin.hb_portInt

Default: 0

set the heartbeat port [default: random]

ConnectionFileMixin.iopub_portInt

Default: 0

set the iopub (PUB) port [default: random]

ConnectionFileMixin.ipUnicode

Default: ''

Set the kernel’s IP address [default localhost].

If the IP address is something other than localhost, then Consoles on other machines will be able to connect to the Kernel, so be careful!

ConnectionFileMixin.shell_portInt

Default: 0

set the shell (ROUTER) port [default: random]

ConnectionFileMixin.stdin_portInt

Default: 0

set the stdin (ROUTER) port [default: random]

ConnectionFileMixin.transport : any of 'tcp' | 'ipc' (case-insensitive)

Default: 'tcp'

No description

KernelManager.autorestartBool

Default: True

Should we autorestart the kernel if it dies.

KernelManager.cache_portsBool

Default: False

True if the MultiKernelManager should cache ports for this KernelManager instance

KernelManager.connection_fileUnicode

Default: ''

JSON file in which to store connection info [default: kernel-<pid>.json]

This file will contain the IP, ports, and authentication key needed to connect clients to this kernel. By default, this file will be created in the security dir of the current profile, but can be specified by absolute path.

KernelManager.control_portInt

Default: 0

set the control (ROUTER) port [default: random]

KernelManager.hb_portInt

Default: 0

set the heartbeat port [default: random]

KernelManager.iopub_portInt

Default: 0

set the iopub (PUB) port [default: random]

KernelManager.ipUnicode

Default: ''

Set the kernel’s IP address [default localhost].

If the IP address is something other than localhost, then Consoles on other machines will be able to connect to the Kernel, so be careful!

KernelManager.shell_portInt

Default: 0

set the shell (ROUTER) port [default: random]

KernelManager.shutdown_wait_timeFloat

Default: 5.0

Time to wait for a kernel to terminate before killing it, in seconds. When a shutdown request is initiated, the kernel will be immediately sent an interrupt (SIGINT), followed by a shutdown_request message; after 1/2 of shutdown_wait_time it will be sent a terminate (SIGTERM) request, and finally at the end of shutdown_wait_time it will be killed (SIGKILL). terminate and kill may be equivalent on Windows. Note that this value can be overridden by the in-use kernel provisioner, since shutdown times may vary by provisioned environment.

KernelManager.stdin_portInt

Default: 0

set the stdin (ROUTER) port [default: random]

KernelManager.transport : any of 'tcp' | 'ipc' (case-insensitive)

Default: 'tcp'

No description

Session.buffer_thresholdInt

Default: 1024

Threshold (in bytes) beyond which an object’s buffer should be extracted to avoid pickling.

Session.check_pidBool

Default: True

Whether to check PID to protect against calls after fork.

This check can be disabled if fork-safety is handled elsewhere.

Session.copy_thresholdInt

Default: 65536

Threshold (in bytes) beyond which a buffer should be sent without copying.

Session.debugBool

Default: False

Debug output in the Session

Session.digest_history_sizeInt

Default: 65536

The maximum number of digests to remember.

The digest history will be culled when it exceeds this value.

Session.item_thresholdInt

Default: 64

The maximum number of items for a container to be introspected for custom serialization.

Containers larger than this are pickled outright.

Session.keyCBytes

Default: b''

execution key, for signing messages.

Session.keyfileUnicode

Default: ''

path to file containing execution key.

Session.metadataDict

Default: {}

Metadata dictionary, which serves as the default top-level metadata dict for each message.

Session.packerDottedObjectName

Default: 'json'

The name of the packer for serializing messages.

Should be one of ‘json’, ‘pickle’, or an import name for a custom callable serializer.

Session.sessionCUnicode

Default: ''

The UUID identifying this session.

Session.signature_schemeUnicode

Default: 'hmac-sha256'

The digest scheme used to construct the message signatures.

Must have the form ‘hmac-HASH’.

Session.unpackerDottedObjectName

Default: 'json'

The name of the unpacker for unserializing messages.

Only used with custom functions for packer.

Session.usernameUnicode

Default: 'username'

Username for the Session. Default is your system username.

MultiKernelManager.default_kernel_nameUnicode

Default: 'python3'

The name of the default kernel to start

MultiKernelManager.kernel_manager_classDottedObjectName

Default: 'jupyter_client.ioloop.IOLoopKernelManager'

The kernel manager class. This is configurable to allow

subclassing of the KernelManager for customized behavior.

MultiKernelManager.shared_contextBool

Default: True

Share a single zmq.Context to talk to all my kernels

MappingKernelManager.allow_tracebacksBool

Default: True

Whether to send tracebacks to clients on exceptions.

MappingKernelManager.allowed_message_typesList

Default: []

White list of allowed kernel message types.

When the list is empty, all message types are allowed.

MappingKernelManager.buffer_offline_messagesBool

Default: True

Whether messages from kernels whose frontends have disconnected should be buffered in-memory.

When True (default), messages are buffered and replayed on reconnect, avoiding lost messages due to interrupted connectivity.

Disable if long-running kernels will produce too much output while no frontends are connected.

MappingKernelManager.cull_busyBool

Default: False

Whether to consider culling kernels which are busy.

Only effective if cull_idle_timeout > 0.

MappingKernelManager.cull_connectedBool

Default: False

Whether to consider culling kernels which have one or more connections.

Only effective if cull_idle_timeout > 0.

MappingKernelManager.cull_idle_timeoutInt

Default: 0

Timeout (in seconds) after which a kernel is considered idle and ready to be culled.

Values of 0 or lower disable culling. Very short timeouts may result in kernels being culled for users with poor network connections.

MappingKernelManager.cull_intervalInt

Default: 300

The interval (in seconds) on which to check for idle kernels exceeding the cull timeout value.
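
Taken together, the culling traits above can be combined in jupyter_server_config.py; a sketch that culls kernels idle for an hour (values are illustrative):

c.MappingKernelManager.cull_idle_timeout = 3600  # consider a kernel idle after one hour
c.MappingKernelManager.cull_interval = 300       # check every five minutes (the default)
c.MappingKernelManager.cull_busy = False         # leave busy kernels alone
c.MappingKernelManager.cull_connected = False    # leave kernels with open connections alone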

MappingKernelManager.default_kernel_nameUnicode

Default: 'python3'

The name of the default kernel to start

MappingKernelManager.kernel_info_timeoutFloat

Default: 60

Timeout for giving up on a kernel (in seconds).

On starting and restarting kernels, we check whether the kernel is running and responsive by sending kernel_info_requests. This sets the timeout in seconds for how long the kernel can take before being presumed dead. This affects the MappingKernelManager (which handles kernel restarts) and the ZMQChannelsHandler (which handles the startup).

MappingKernelManager.kernel_manager_classDottedObjectName

Default: 'jupyter_client.ioloop.IOLoopKernelManager'

The kernel manager class. This is configurable to allow

subclassing of the KernelManager for customized behavior.

MappingKernelManager.root_dirUnicode

Default: ''

No description

MappingKernelManager.shared_contextBool

Default: True

Share a single zmq.Context to talk to all my kernels

MappingKernelManager.traceback_replacement_messageUnicode

Default: 'An exception occurred at runtime, which is not shown due to ...

Message to print when allow_tracebacks is False, and an exception occurs

KernelSpecManager.allowed_kernelspecsSet

Default: set()

List of allowed kernel names.

By default, all installed kernels are allowed.

KernelSpecManager.ensure_native_kernelBool

Default: True

If there is no Python kernelspec registered and the IPython

kernel is available, ensure it is added to the spec list.

KernelSpecManager.kernel_spec_classType

Default: 'jupyter_client.kernelspec.KernelSpec'

The kernel spec class. This is configurable to allow

subclassing of the KernelSpecManager for customized behavior.

KernelSpecManager.whitelistSet

Default: set()

Deprecated, use KernelSpecManager.allowed_kernelspecs

AsyncMultiKernelManager.default_kernel_nameUnicode

Default: 'python3'

The name of the default kernel to start

AsyncMultiKernelManager.kernel_manager_classDottedObjectName

Default: 'jupyter_client.ioloop.AsyncIOLoopKernelManager'

The kernel manager class. This is configurable to allow

subclassing of the AsyncKernelManager for customized behavior.

AsyncMultiKernelManager.shared_contextBool

Default: True

Share a single zmq.Context to talk to all my kernels

AsyncMultiKernelManager.use_pending_kernelsBool

Default: False

Whether to make kernels available before the process has started. The

kernel has a .ready future which can be awaited before connecting

AsyncMappingKernelManager.allow_tracebacksBool

Default: True

Whether to send tracebacks to clients on exceptions.

AsyncMappingKernelManager.allowed_message_typesList

Default: []

White list of allowed kernel message types.

When the list is empty, all message types are allowed.

AsyncMappingKernelManager.buffer_offline_messagesBool

Default: True

Whether messages from kernels whose frontends have disconnected should be buffered in-memory.

When True (default), messages are buffered and replayed on reconnect, avoiding lost messages due to interrupted connectivity.

Disable if long-running kernels will produce too much output while no frontends are connected.

AsyncMappingKernelManager.cull_busyBool

Default: False

Whether to consider culling kernels which are busy.

Only effective if cull_idle_timeout > 0.

AsyncMappingKernelManager.cull_connectedBool

Default: False

Whether to consider culling kernels which have one or more connections.

Only effective if cull_idle_timeout > 0.

AsyncMappingKernelManager.cull_idle_timeoutInt

Default: 0

Timeout (in seconds) after which a kernel is considered idle and ready to be culled.

Values of 0 or lower disable culling. Very short timeouts may result in kernels being culled for users with poor network connections.

AsyncMappingKernelManager.cull_intervalInt

Default: 300

The interval (in seconds) on which to check for idle kernels exceeding the cull timeout value.

AsyncMappingKernelManager.default_kernel_nameUnicode

Default: 'python3'

The name of the default kernel to start

AsyncMappingKernelManager.kernel_info_timeoutFloat

Default: 60

Timeout for giving up on a kernel (in seconds).

On starting and restarting kernels, we check whether the kernel is running and responsive by sending kernel_info_requests. This sets the timeout in seconds for how long the kernel can take before being presumed dead. This affects the MappingKernelManager (which handles kernel restarts) and the ZMQChannelsHandler (which handles the startup).

AsyncMappingKernelManager.kernel_manager_classDottedObjectName

Default: 'jupyter_client.ioloop.AsyncIOLoopKernelManager'

The kernel manager class. This is configurable to allow

subclassing of the AsyncKernelManager for customized behavior.

AsyncMappingKernelManager.root_dirUnicode

Default: ''

No description

AsyncMappingKernelManager.shared_contextBool

Default: True

Share a single zmq.Context to talk to all my kernels

AsyncMappingKernelManager.traceback_replacement_messageUnicode

Default: 'An exception occurred at runtime, which is not shown due to ...

Message to print when allow_tracebacks is False, and an exception occurs

AsyncMappingKernelManager.use_pending_kernelsBool

Default: False

Whether to make kernels available before the process has started. The

kernel has a .ready future which can be awaited before connecting

ContentsManager.allow_hiddenBool

Default: False

Allow access to hidden files

ContentsManager.checkpointsInstance

Default: None

No description

ContentsManager.checkpoints_classType

Default: 'jupyter_server.services.contents.checkpoints.Checkpoints'

No description

ContentsManager.checkpoints_kwargsDict

Default: {}

No description

ContentsManager.event_loggerInstance

Default: None

No description

ContentsManager.files_handler_classType

Default: 'jupyter_server.files.handlers.FilesHandler'

handler class to use when serving raw file requests.

Default is a fallback that talks to the ContentsManager API, which may be inefficient, especially for large files.

Local files-based ContentsManagers can use a StaticFileHandler subclass, which will be much more efficient.

Access to these files should be Authenticated.

ContentsManager.files_handler_paramsDict

Default: {}

Extra parameters to pass to files_handler_class.

For example, StaticFileHandlers generally expect a path argument specifying the root directory from which to serve files.

ContentsManager.hide_globsList

Default: ['__pycache__', '*.pyc', '*.pyo', '.DS_Store', '*.so', '*.dyl...

Glob patterns to hide in file and directory listings.

ContentsManager.post_save_hookAny

Default: None

Python callable or importstring thereof

to be called on the path of a file just saved.

This can be used to process the file on disk, such as converting the notebook to a script or HTML via nbconvert.

It will be called as (all arguments passed by keyword):

hook(os_path=os_path, model=model, contents_manager=instance)
  • path: the filesystem path to the file just written

  • model: the model representing the file

  • contents_manager: this ContentsManager instance
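
A minimal sketch of such a hook; the module name myhooks and the function are hypothetical, and the module must be importable by the server:

# myhooks.py -- hypothetical module providing a post-save hook
def log_notebook_save(os_path, model, contents_manager, **kwargs):
    """Called with keyword arguments after a file is saved; this sketch only logs notebook saves."""
    if model["type"] != "notebook":
        return
    contents_manager.log.info("Notebook written to %s", os_path)

# in jupyter_server_config.py, set the trait to the callable or its import string:
# c.ContentsManager.post_save_hook = "myhooks.log_notebook_save"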

ContentsManager.pre_save_hookAny

Default: None

Python callable or importstring thereof

To be called on a contents model prior to save.

This can be used to process the structure, such as removing notebook outputs or other side effects that should not be saved.

It will be called as (all arguments passed by keyword):

hook(path=path, model=model, contents_manager=self)
  • model: the model to be saved. Includes file contents. Modifying this dict will affect the file that is stored.

  • path: the API path of the save destination

  • contents_manager: this ContentsManager instance
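
A sketch of a pre-save hook that clears code-cell outputs before the notebook reaches disk (the module and function names are hypothetical):

# myhooks.py -- hypothetical module providing a pre-save hook
def scrub_output_pre_save(model, path, contents_manager, **kwargs):
    """Clear outputs and execution counts from code cells before a notebook is saved."""
    if model["type"] != "notebook" or model["content"].get("nbformat") != 4:
        return
    for cell in model["content"]["cells"]:
        if cell["cell_type"] == "code":
            cell["outputs"] = []
            cell["execution_count"] = None

# in jupyter_server_config.py:
# c.ContentsManager.pre_save_hook = "myhooks.scrub_output_pre_save"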

ContentsManager.preferred_dirUnicode

Default: ''

Preferred starting directory to use for notebooks. This is an API path (/ separated, relative to root dir)

ContentsManager.root_dirUnicode

Default: '/'

No description

ContentsManager.untitled_directoryUnicode

Default: 'Untitled Folder'

The base name used when creating untitled directories.

ContentsManager.untitled_fileUnicode

Default: 'untitled'

The base name used when creating untitled files.

ContentsManager.untitled_notebookUnicode

Default: 'Untitled'

The base name used when creating untitled notebooks.

FileManagerMixin.hash_algorithm : any of 'sha3_256' | 'shake_128' | 'sha512' | 'sha384' | 'sha512_224' | 'shake_256' | 'sha3_512' | 'sha3_224' | 'sha256' | 'md5-sha1' | 'blake2b' | 'sha224' | 'sha3_384' | 'sm3' | 'sha1' | 'blake2s' | 'sha512_256' | 'md5'

Default: 'sha256'

Hash algorithm to use for file content, supported by hashlib.

FileManagerMixin.use_atomic_writingBool

Default: True

By default notebooks are saved on disk to a temporary file which, if successfully written, then replaces the old one.

This procedure, namely ‘atomic_writing’, causes some bugs on file systems without operation order enforcement (like some networked file systems). If set to False, the new notebook is written directly over the old one, which could fail (e.g. full filesystem or quota exceeded).

FileContentsManager.allow_hiddenBool

Default: False

Allow access to hidden files

FileContentsManager.always_delete_dirBool

Default: False

If True, deleting a non-empty directory will always be allowed.

WARNING this may result in files being permanently removed; e.g. on Windows, if the data size is too big for the trash/recycle bin the directory will be permanently deleted. If False (default), the non-empty directory will be sent to the trash only if safe. And if delete_to_trash is True, the directory won’t be deleted.

FileContentsManager.checkpointsInstance

Default: None

No description

FileContentsManager.checkpoints_classType

Default: 'jupyter_server.services.contents.checkpoints.Checkpoints'

No description

FileContentsManager.checkpoints_kwargsDict

Default: {}

No description

FileContentsManager.delete_to_trashBool

Default: True

If True (default), deleting files will send them to the

platform’s trash/recycle bin, where they can be recovered. If False, deleting files really deletes them.

FileContentsManager.event_loggerInstance

Default: None

No description

FileContentsManager.files_handler_classType

Default: 'jupyter_server.files.handlers.FilesHandler'

handler class to use when serving raw file requests.

Default is a fallback that talks to the ContentsManager API, which may be inefficient, especially for large files.

Local files-based ContentsManagers can use a StaticFileHandler subclass, which will be much more efficient.

Access to these files should be Authenticated.

FileContentsManager.files_handler_paramsDict

Default: {}

Extra parameters to pass to files_handler_class.

For example, StaticFileHandlers generally expect a path argument specifying the root directory from which to serve files.

FileContentsManager.hash_algorithm : any of 'sha3_256' | 'shake_128' | 'sha512' | 'sha384' | 'sha512_224' | 'shake_256' | 'sha3_512' | 'sha3_224' | 'sha256' | 'md5-sha1' | 'blake2b' | 'sha224' | 'sha3_384' | 'sm3' | 'sha1' | 'blake2s' | 'sha512_256' | 'md5'

Default: 'sha256'

Hash algorithm to use for file content, supported by hashlib.

FileContentsManager.hide_globsList

Default: ['__pycache__', '*.pyc', '*.pyo', '.DS_Store', '*.so', '*.dyl...

Glob patterns to hide in file and directory listings.

FileContentsManager.max_copy_folder_size_mbInt

Default: 500

The max folder size that can be copied

FileContentsManager.post_save_hookAny

Default: None

Python callable or importstring thereof

to be called on the path of a file just saved.

This can be used to process the file on disk, such as converting the notebook to a script or HTML via nbconvert.

It will be called as (all arguments passed by keyword):

hook(os_path=os_path, model=model, contents_manager=instance)
  • path: the filesystem path to the file just written

  • model: the model representing the file

  • contents_manager: this ContentsManager instance

FileContentsManager.pre_save_hookAny

Default: None

Python callable or importstring thereof

To be called on a contents model prior to save.

This can be used to process the structure, such as removing notebook outputs or other side effects that should not be saved.

It will be called as (all arguments passed by keyword):

hook(path=path, model=model, contents_manager=self)
  • model: the model to be saved. Includes file contents. Modifying this dict will affect the file that is stored.

  • path: the API path of the save destination

  • contents_manager: this ContentsManager instance

FileContentsManager.preferred_dirUnicode

Default: ''

Preferred starting directory to use for notebooks. This is an API path (/ separated, relative to root dir)

FileContentsManager.root_dirUnicode

Default: ''

No description

FileContentsManager.untitled_directoryUnicode

Default: 'Untitled Folder'

The base name used when creating untitled directories.

FileContentsManager.untitled_fileUnicode

Default: 'untitled'

The base name used when creating untitled files.

FileContentsManager.untitled_notebookUnicode

Default: 'Untitled'

The base name used when creating untitled notebooks.

FileContentsManager.use_atomic_writingBool

Default: True

By default notebooks are saved on disk to a temporary file which, if successfully written, then replaces the old one.

This procedure, namely ‘atomic_writing’, causes some bugs on file systems without operation order enforcement (like some networked file systems). If set to False, the new notebook is written directly over the old one, which could fail (e.g. full filesystem or quota exceeded).

AsyncContentsManager.allow_hiddenBool

Default: False

Allow access to hidden files

AsyncContentsManager.checkpointsInstance

Default: None

No description

AsyncContentsManager.checkpoints_classType

Default: 'jupyter_server.services.contents.checkpoints.AsyncCheckpoints'

No description

AsyncContentsManager.checkpoints_kwargsDict

Default: {}

No description

AsyncContentsManager.event_loggerInstance

Default: None

No description

AsyncContentsManager.files_handler_classType

Default: 'jupyter_server.files.handlers.FilesHandler'

handler class to use when serving raw file requests.

Default is a fallback that talks to the ContentsManager API, which may be inefficient, especially for large files.

Local files-based ContentsManagers can use a StaticFileHandler subclass, which will be much more efficient.

Access to these files should be Authenticated.

AsyncContentsManager.files_handler_paramsDict

Default: {}

Extra parameters to pass to files_handler_class.

For example, StaticFileHandlers generally expect a path argument specifying the root directory from which to serve files.

AsyncContentsManager.hide_globsList

Default: ['__pycache__', '*.pyc', '*.pyo', '.DS_Store', '*.so', '*.dyl...

Glob patterns to hide in file and directory listings.

AsyncContentsManager.post_save_hookAny

Default: None

Python callable or importstring thereof

to be called on the path of a file just saved.

This can be used to process the file on disk, such as converting the notebook to a script or HTML via nbconvert.

It will be called as (all arguments passed by keyword):

hook(os_path=os_path, model=model, contents_manager=instance)
  • path: the filesystem path to the file just written

  • model: the model representing the file

  • contents_manager: this ContentsManager instance

AsyncContentsManager.pre_save_hookAny

Default: None

Python callable or importstring thereof

To be called on a contents model prior to save.

This can be used to process the structure, such as removing notebook outputs or other side effects that should not be saved.

It will be called as (all arguments passed by keyword):

hook(path=path, model=model, contents_manager=self)
  • model: the model to be saved. Includes file contents. Modifying this dict will affect the file that is stored.

  • path: the API path of the save destination

  • contents_manager: this ContentsManager instance

AsyncContentsManager.preferred_dirUnicode

Default: ''

Preferred starting directory to use for notebooks. This is an API path (/ separated, relative to root dir)

AsyncContentsManager.root_dirUnicode

Default: '/'

No description

AsyncContentsManager.untitled_directoryUnicode

Default: 'Untitled Folder'

The base name used when creating untitled directories.

AsyncContentsManager.untitled_fileUnicode

Default: 'untitled'

The base name used when creating untitled files.

AsyncContentsManager.untitled_notebookUnicode

Default: 'Untitled'

The base name used when creating untitled notebooks.

AsyncFileManagerMixin.hash_algorithm : any of 'sha3_256' | 'shake_128' | 'sha512' | 'sha384' | 'sha512_224' | 'shake_256' | 'sha3_512' | 'sha3_224' | 'sha256' | 'md5-sha1' | 'blake2b' | 'sha224' | 'sha3_384' | 'sm3' | 'sha1' | 'blake2s' | 'sha512_256' | 'md5'

Default: 'sha256'

Hash algorithm to use for file content, supported by hashlib.

AsyncFileManagerMixin.use_atomic_writingBool

Default: True

By default notebooks are saved on disk to a temporary file which, if successfully written, then replaces the old one.

This procedure, namely ‘atomic_writing’, causes some bugs on file systems without operation order enforcement (like some networked file systems). If set to False, the new notebook is written directly over the old one, which could fail (e.g. full filesystem or quota exceeded).

AsyncFileContentsManager.allow_hiddenBool

Default: False

Allow access to hidden files

AsyncFileContentsManager.always_delete_dirBool

Default: False

If True, deleting a non-empty directory will always be allowed.

WARNING this may result in files being permanently removed; e.g. on Windows, if the data size is too big for the trash/recycle bin the directory will be permanently deleted. If False (default), the non-empty directory will be sent to the trash only if safe. And if delete_to_trash is True, the directory won’t be deleted.

AsyncFileContentsManager.checkpointsInstance

Default: None

No description

AsyncFileContentsManager.checkpoints_classType

Default: 'jupyter_server.services.contents.checkpoints.AsyncCheckpoints'

No description

AsyncFileContentsManager.checkpoints_kwargsDict

Default: {}

No description

AsyncFileContentsManager.delete_to_trashBool

Default: True

If True (default), deleting files will send them to the

platform’s trash/recycle bin, where they can be recovered. If False, deleting files really deletes them.

AsyncFileContentsManager.event_loggerInstance

Default: None

No description

AsyncFileContentsManager.files_handler_classType

Default: 'jupyter_server.files.handlers.FilesHandler'

handler class to use when serving raw file requests.

Default is a fallback that talks to the ContentsManager API, which may be inefficient, especially for large files.

Local files-based ContentsManagers can use a StaticFileHandler subclass, which will be much more efficient.

Access to these files should be Authenticated.

AsyncFileContentsManager.files_handler_paramsDict

Default: {}

Extra parameters to pass to files_handler_class.

For example, StaticFileHandlers generally expect a path argument specifying the root directory from which to serve files.

AsyncFileContentsManager.hash_algorithm : any of 'sha3_256' | 'shake_128' | 'sha512' | 'sha384' | 'sha512_224' | 'shake_256' | 'sha3_512' | 'sha3_224' | 'sha256' | 'md5-sha1' | 'blake2b' | 'sha224' | 'sha3_384' | 'sm3' | 'sha1' | 'blake2s' | 'sha512_256' | 'md5'

Default: 'sha256'

Hash algorithm to use for file content, supported by hashlib.

AsyncFileContentsManager.hide_globsList

Default: ['__pycache__', '*.pyc', '*.pyo', '.DS_Store', '*.so', '*.dyl...

Glob patterns to hide in file and directory listings.

AsyncFileContentsManager.max_copy_folder_size_mbInt

Default: 500

The max folder size that can be copied

AsyncFileContentsManager.post_save_hookAny

Default: None

Python callable or importstring thereof

to be called on the path of a file just saved.

This can be used to process the file on disk, such as converting the notebook to a script or HTML via nbconvert.

It will be called as (all arguments passed by keyword):

hook(os_path=os_path, model=model, contents_manager=instance)
  • path: the filesystem path to the file just written

  • model: the model representing the file

  • contents_manager: this ContentsManager instance

AsyncFileContentsManager.pre_save_hookAny

Default: None

Python callable or importstring thereof

To be called on a contents model prior to save.

This can be used to process the structure, such as removing notebook outputs or other side effects that should not be saved.

It will be called as (all arguments passed by keyword):

hook(path=path, model=model, contents_manager=self)
  • model: the model to be saved. Includes file contents. Modifying this dict will affect the file that is stored.

  • path: the API path of the save destination

  • contents_manager: this ContentsManager instance

AsyncFileContentsManager.preferred_dirUnicode

Default: ''

Preferred starting directory to use for notebooks. This is an API path (/ separated, relative to root dir)

AsyncFileContentsManager.root_dirUnicode

Default: ''

No description

AsyncFileContentsManager.untitled_directoryUnicode

Default: 'Untitled Folder'

The base name used when creating untitled directories.

AsyncFileContentsManager.untitled_fileUnicode

Default: 'untitled'

The base name used when creating untitled files.

AsyncFileContentsManager.untitled_notebookUnicode

Default: 'Untitled'

The base name used when creating untitled notebooks.

AsyncFileContentsManager.use_atomic_writingBool

Default: True

By default notebooks are saved on disk to a temporary file which, if successfully written, then replaces the old one.

This procedure, namely ‘atomic_writing’, causes some bugs on file systems without operation order enforcement (like some networked file systems). If set to False, the new notebook is written directly over the old one, which could fail (e.g. full filesystem or quota exceeded).

NotebookNotary.algorithm : any of 'sha224' | 'sha3_256' | 'sha3_512' | 'sha256' | 'sha512' | 'sha3_224' | 'sha3_384' | 'sha1' | 'blake2s' | 'sha384' | 'md5' | 'blake2b'

Default: 'sha256'

The hashing algorithm used to sign notebooks.

NotebookNotary.data_dirUnicode

Default: ''

The storage directory for notary secret and database.

NotebookNotary.db_fileUnicode

Default: ''

The sqlite file in which to store notebook signatures.

By default, this will be in your Jupyter data directory. You can set it to ‘:memory:’ to disable sqlite writing to the filesystem.

NotebookNotary.secretBytes

Default: b''

The secret key with which notebooks are signed.

NotebookNotary.secret_fileUnicode

Default: ''

The file where the secret key is stored.

NotebookNotary.store_factoryCallable

Default: traitlets.Undefined

A callable returning the storage backend for notebook signatures.

The default uses an SQLite database.

GatewayMappingKernelManager.allow_tracebacksBool

Default: True

Whether to send tracebacks to clients on exceptions.

GatewayMappingKernelManager.allowed_message_typesList

Default: []

White list of allowed kernel message types.

When the list is empty, all message types are allowed.

GatewayMappingKernelManager.buffer_offline_messagesBool

Default: True

Whether messages from kernels whose frontends have disconnected should be buffered in-memory.

When True (default), messages are buffered and replayed on reconnect, avoiding lost messages due to interrupted connectivity.

Disable if long-running kernels will produce too much output while no frontends are connected.

GatewayMappingKernelManager.cull_busyBool

Default: False

Whether to consider culling kernels which are busy.

Only effective if cull_idle_timeout > 0.

GatewayMappingKernelManager.cull_connectedBool

Default: False

Whether to consider culling kernels which have one or more connections.

Only effective if cull_idle_timeout > 0.

GatewayMappingKernelManager.cull_idle_timeoutInt

Default: 0

Timeout (in seconds) after which a kernel is considered idle and ready to be culled.

Values of 0 or lower disable culling. Very short timeouts may result in kernels being culled for users with poor network connections.

GatewayMappingKernelManager.cull_intervalInt

Default: 300

The interval (in seconds) on which to check for idle kernels exceeding the cull timeout value.

GatewayMappingKernelManager.default_kernel_nameUnicode

Default: 'python3'

The name of the default kernel to start

GatewayMappingKernelManager.kernel_info_timeoutFloat

Default: 60

Timeout for giving up on a kernel (in seconds).

On starting and restarting kernels, we check whether the kernel is running and responsive by sending kernel_info_requests. This sets the timeout in seconds for how long the kernel can take before being presumed dead. This affects the MappingKernelManager (which handles kernel restarts) and the ZMQChannelsHandler (which handles the startup).

GatewayMappingKernelManager.kernel_manager_classDottedObjectName

Default: 'jupyter_client.ioloop.AsyncIOLoopKernelManager'

The kernel manager class. This is configurable to allow

subclassing of the AsyncKernelManager for customized behavior.

GatewayMappingKernelManager.root_dirUnicode

Default: ''

No description

GatewayMappingKernelManager.shared_contextBool

Default: True

Share a single zmq.Context to talk to all my kernels

GatewayMappingKernelManager.traceback_replacement_messageUnicode

Default: 'An exception occurred at runtime, which is not shown due to ...

Message to print when allow_tracebacks is False, and an exception occurs

GatewayMappingKernelManager.use_pending_kernelsBool

Default: False

Whether to make kernels available before the process has started. The

kernel has a .ready future which can be awaited before connecting

GatewayKernelSpecManager.allowed_kernelspecsSet

Default: set()

List of allowed kernel names.

By default, all installed kernels are allowed.

GatewayKernelSpecManager.ensure_native_kernelBool

Default: True

If there is no Python kernelspec registered and the IPython

kernel is available, ensure it is added to the spec list.

GatewayKernelSpecManager.kernel_spec_classType

Default: 'jupyter_client.kernelspec.KernelSpec'

The kernel spec class. This is configurable to allow

subclassing of the KernelSpecManager for customized behavior.

GatewayKernelSpecManager.whitelistSet

Default: set()

Deprecated, use KernelSpecManager.allowed_kernelspecs

SessionManager.database_filepathUnicode

Default: ':memory:'

The filesystem path to SQLite Database file (e.g. /path/to/session_database.db). By default, the session database is stored in-memory (i.e. :memory: setting from sqlite3) and does not persist when the current Jupyter Server shuts down.
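
To persist sessions across server restarts, point the trait at a file instead (the path below is taken from the example above and is only a placeholder):

c.SessionManager.database_filepath = "/path/to/session_database.db"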

GatewaySessionManager.database_filepath Unicode

Default: ':memory:'

The filesystem path to the SQLite database file (e.g. /path/to/session_database.db). By default, the session database is stored in-memory (i.e. the :memory: setting from sqlite3) and does not persist when the current Jupyter Server shuts down.

BaseKernelWebsocketConnection.kernel_ws_protocol Unicode

Default: None

Preferred kernel message protocol over websocket to use (default: None). If an empty string is passed, select the legacy protocol. If None, the selected protocol will depend on what the front-end supports (usually the most recent protocol supported by the back-end and the front-end).

BaseKernelWebsocketConnection.session Instance

Default: None

No description

GatewayWebSocketConnection.kernel_ws_protocol Unicode

Default: ''

No description

GatewayWebSocketConnection.session Instance

Default: None

No description

GatewayClient.accept_cookies Bool

Default: False

Accept and manage cookies sent by the service side. This is often useful for load balancers to decide which backend node to use. (JUPYTER_GATEWAY_ACCEPT_COOKIES env var)

GatewayClient.allowed_envs Unicode

Default: ''

A comma-separated list of environment variable names that will be included, along with their values, in the kernel startup request. The corresponding client_envs configuration value must also be set on the Gateway server - since that configuration value indicates which environmental values to make available to the kernel. (JUPYTER_GATEWAY_ALLOWED_ENVS env var)
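
For instance, to forward two environment variables to kernels started through the gateway (the variable names are illustrative; the gateway's client_envs setting must also permit them):

c.GatewayClient.allowed_envs = "HTTP_PROXY,MY_PROJECT_ID"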

GatewayClient.auth_header_key Unicode

Default: ''

The authorization header’s key name (typically ‘Authorization’) used in the HTTP headers. The header will be formatted as:

{'{auth_header_key}': '{auth_scheme} {auth_token}'}

If the authorization header key takes a single value, auth_scheme should be set to None and ‘auth_token’ should be configured to use the appropriate value.

(JUPYTER_GATEWAY_AUTH_HEADER_KEY env var)

GatewayClient.auth_scheme Unicode

Default: ''

The auth scheme, added as a prefix to the authorization token used in the HTTP headers. (JUPYTER_GATEWAY_AUTH_SCHEME env var)

GatewayClient.auth_token Unicode

Default: None

The authorization token used in the HTTP headers. The header will be formatted as:

{'{auth_header_key}': '{auth_scheme} {auth_token}'}

(JUPYTER_GATEWAY_AUTH_TOKEN env var)
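
Putting these three traits together, a hypothetical token-authenticated gateway could be configured as follows (the token value is a placeholder):

c.GatewayClient.auth_header_key = "Authorization"
c.GatewayClient.auth_scheme = "token"
c.GatewayClient.auth_token = "<my-gateway-token>"

With these settings, each request to the gateway carries a header of the form Authorization: token <my-gateway-token>.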

GatewayClient.ca_certs Unicode

Default: None

The filename of CA certificates or None to use defaults. (JUPYTER_GATEWAY_CA_CERTS env var)

GatewayClient.client_cert Unicode

Default: None

The filename for client SSL certificate, if any. (JUPYTER_GATEWAY_CLIENT_CERT env var)

GatewayClient.client_key Unicode

Default: None

The filename for client SSL key, if any. (JUPYTER_GATEWAY_CLIENT_KEY env var)

GatewayClient.connect_timeout Float

Default: 40.0

The time allowed for HTTP connection establishment with the Gateway server. (JUPYTER_GATEWAY_CONNECT_TIMEOUT env var)

GatewayClient.env_whitelist Unicode

Default: ''

Deprecated, use GatewayClient.allowed_envs

GatewayClient.event_logger Instance

Default: None

No description

GatewayClient.gateway_retry_interval Float

Default: 1.0

The time (in seconds) to wait before the first HTTP reconnection attempt to the Gateway server. Each subsequent retry doubles the previous interval, capped at JUPYTER_GATEWAY_RETRY_INTERVAL_MAX. (JUPYTER_GATEWAY_RETRY_INTERVAL env var)

GatewayClient.gateway_retry_interval_max Float

Default: 30.0

The maximum time allowed for HTTP reconnection retry with the Gateway server. (JUPYTER_GATEWAY_RETRY_INTERVAL_MAX env var)

GatewayClient.gateway_retry_max Int

Default: 5

The maximum retries allowed for HTTP reconnection with the Gateway server. (JUPYTER_GATEWAY_RETRY_MAX env var)

GatewayClient.gateway_token_renewer_class Type

Default: 'jupyter_server.gateway.gateway_client.GatewayTokenRenewerBase'

The class to use for Gateway token renewal. (JUPYTER_GATEWAY_TOKEN_RENEWER_CLASS env var)

GatewayClient.headers Unicode

Default: '{}'

Additional HTTP headers to pass on the request. This value will be converted to a dict.

(JUPYTER_GATEWAY_HEADERS env var)
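
For example, to attach an extra header to every gateway request, the value can be supplied as a JSON string (the header name and value are illustrative):

c.GatewayClient.headers = '{"X-Custom-Header": "some-value"}'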

GatewayClient.http_pwd Unicode

Default: None

The password for HTTP authentication. (JUPYTER_GATEWAY_HTTP_PWD env var)

GatewayClient.http_user Unicode

Default: None

The username for HTTP authentication. (JUPYTER_GATEWAY_HTTP_USER env var)

GatewayClient.kernels_endpoint Unicode

Default: '/api/kernels'

The gateway API endpoint for accessing kernel resources (JUPYTER_GATEWAY_KERNELS_ENDPOINT env var)

GatewayClient.kernelspecs_endpoint Unicode

Default: '/api/kernelspecs'

The gateway API endpoint for accessing kernelspecs (JUPYTER_GATEWAY_KERNELSPECS_ENDPOINT env var)

GatewayClient.kernelspecs_resource_endpoint Unicode

Default: '/kernelspecs'

The gateway endpoint for accessing kernelspecs resources (JUPYTER_GATEWAY_KERNELSPECS_RESOURCE_ENDPOINT env var)

GatewayClient.launch_timeout_pad Float

Default: 2.0

Timeout pad to be ensured between KERNEL_LAUNCH_TIMEOUT and request_timeout such that request_timeout >= KERNEL_LAUNCH_TIMEOUT + launch_timeout_pad. (JUPYTER_GATEWAY_LAUNCH_TIMEOUT_PAD env var)
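
As a concrete illustration of that inequality: if KERNEL_LAUNCH_TIMEOUT is 40 seconds and launch_timeout_pad keeps its default of 2.0, then request_timeout should be at least 42.0 seconds, which matches the default shown next. A sketch:

c.GatewayClient.launch_timeout_pad = 2.0
c.GatewayClient.request_timeout = 42.0  # >= KERNEL_LAUNCH_TIMEOUT (40) + launch_timeout_pad (2.0)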

GatewayClient.request_timeout Float

Default: 42.0

The time allowed for HTTP request completion. (JUPYTER_GATEWAY_REQUEST_TIMEOUT env var)

GatewayClient.url Unicode

Default: None

The url of the Kernel or Enterprise Gateway server where kernel specifications are defined and kernel management takes place. If defined, this Notebook server acts as a proxy for all kernel management and kernel specification retrieval. (JUPYTER_GATEWAY_URL env var)

GatewayClient.validate_cert Bool

Default: True

For HTTPS requests, determines if server’s certificate should be validated or not. (JUPYTER_GATEWAY_VALIDATE_CERT env var)

GatewayClient.ws_url Unicode

Default: None

The websocket url of the Kernel or Enterprise Gateway server. If not provided, this value will correspond to the value of the Gateway url with ‘ws’ in place of ‘http’. (JUPYTER_GATEWAY_WS_URL env var)
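
A minimal sketch of pointing a server at a gateway, assuming a gateway reachable at gateway.example.com (the hostname and port are assumptions):

c.GatewayClient.url = "https://gateway.example.com:8888"
c.GatewayClient.validate_cert = True
# ws_url is optional; if omitted it is derived from url with 'ws' in place of 'http'
# c.GatewayClient.ws_url = "wss://gateway.example.com:8888"

The same values can be supplied through the JUPYTER_GATEWAY_URL and JUPYTER_GATEWAY_WS_URL environment variables.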

EventLogger.handlers Handlers

Default: None

A list of logging.Handler instances to send events to.

When set to None (the default), all events are discarded.
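
Because this trait accepts ordinary logging.Handler instances, a config file can, for example, route emitted events to a file (the filename is an assumption):

import logging
c.EventLogger.handlers = [logging.FileHandler("jupyter_server_events.log")]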

ZMQChannelsWebsocketConnection.iopub_data_rate_limit Float

Default: 1000000

(bytes/sec) Maximum rate at which stream output can be sent on iopub before it is limited.

ZMQChannelsWebsocketConnection.iopub_msg_rate_limit Float

Default: 1000

(msgs/sec) Maximum rate at which messages can be sent on iopub before they are limited.

ZMQChannelsWebsocketConnection.kernel_ws_protocol Unicode

Default: None

Preferred kernel message protocol over websocket to use (default: None). If an empty string is passed, select the legacy protocol. If None, the selected protocol will depend on what the front-end supports (usually the most recent protocol supported by the back-end and the front-end).

ZMQChannelsWebsocketConnection.limit_rate Bool

Default: True

Whether to limit the rate of IOPub messages (default: True). If True, use iopub_msg_rate_limit, iopub_data_rate_limit and/or rate_limit_window to tune the rate.
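
A sketch of relaxing these IOPub limits for output-heavy notebooks (the numbers are illustrative):

c.ZMQChannelsWebsocketConnection.iopub_msg_rate_limit = 3000       # messages per second
c.ZMQChannelsWebsocketConnection.iopub_data_rate_limit = 10000000  # bytes per second
c.ZMQChannelsWebsocketConnection.rate_limit_window = 3             # seconds
# or disable rate limiting entirely:
# c.ZMQChannelsWebsocketConnection.limit_rate = False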

ZMQChannelsWebsocketConnection.rate_limit_window Float

Default: 3

(sec) Time window used to check the message and data rate limits.

ZMQChannelsWebsocketConnection.session Instance

Default: None

No description

Changelog#

All notable changes to this project will be documented in this file.

2.14.0#

(Full Changelog)

Enhancements made#
  • Do not include token in dashboard link, when available #1406 (@minrk)

Bugs fixed#
  • Ignore zero-length page_config.json, restore previous behavior of crashing for invalid JSON #1405 (@holzman)

  • Don’t crash on invalid JSON in page_config (#1403) #1404 (@holzman)

Maintenance and upkeep improvements#
Documentation improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @dependabot | @holzman | @krassowski | @markypizz | @minrk | @oliver-sanders | @pre-commit-ci | @welcome | @Zsailer

2.13.0#

(Full Changelog)

Enhancements made#
  • Add an option to have authentication enabled for all endpoints by default #1392 (@krassowski)

  • websockets: add configurations for ping interval and timeout #1391 (@oliver-sanders)

Bugs fixed#
Maintenance and upkeep improvements#
Documentation improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @hansepac | @krassowski | @manics | @minrk | @oliver-sanders | @pre-commit-ci | @Timeroot | @welcome | @yuvipanda

2.12.5#

(Full Changelog)

Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073

2.12.4#

(Full Changelog)

Bugs fixed#
  • Fix log arguments for gateway client error #1385 (@minrk)

Contributors to this release#

(GitHub contributors page for this release)

@minrk

2.12.3#

(Full Changelog)

Bugs fixed#
Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@mwouts | @tornaria | @welcome | @yuvipanda

2.12.2#

(Full Changelog)

Bugs fixed#
Maintenance and upkeep improvements#
Documentation improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @bollwyvl | @epignot | @krassowski

2.12.1#

(Full Changelog)

Enhancements made#
  • log extension import time at debug level unless it’s actually slow #1375 (@minrk)

  • Add support for async Authorizers (part 2) #1374 (@Zsailer)

Contributors to this release#

(GitHub contributors page for this release)

@minrk | @Zsailer

2.12.0#

(Full Changelog)

Enhancements made#
Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @pre-commit-ci | @Zsailer

2.11.2#

(Full Changelog)

Contributors to this release#

(GitHub contributors page for this release)

2.11.1#

(Full Changelog)

Bugs fixed#
  • avoid unhandled error on some invalid paths #1369 (@minrk)

  • Change md5 to hash and hash_algorithm, fix incompatibility #1367 (@Wh1isper)

Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @fcollonval | @minrk | @Wh1isper

2.11.0#

(Full Changelog)

Enhancements made#
Maintenance and upkeep improvements#
Documentation improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @IITII | @welcome | @Wh1isper

2.10.1#

(Full Changelog)

Bugs fixed#
  • ContentsHandler return 404 rather than raise exc #1357 (@bloomsa)

Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @bloomsa | @pre-commit-ci

2.10.0#

(Full Changelog)

Enhancements made#
Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073

2.9.1#

(Full Changelog)

Bugs fixed#
  • Revert “Update kernel env to reflect changes in session.” #1346 (@blink1073)

Contributors to this release#

(GitHub contributors page for this release)

@blink1073

2.9.0#

(Full Changelog)

Enhancements made#
Bugs fixed#
  • Run Gateway token renewers even if the auth token is empty. #1340 (@ojarjur)

Contributors to this release#

(GitHub contributors page for this release)

@akshaychitneni | @Carreau | @ojarjur

2.8.0#

(Full Changelog)

Enhancements made#
Bugs fixed#
  • Avoid showing “No answer for 5s” when shutdown is slow #1320 (@minrk)

Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @dependabot | @jayeshsingh9767 | @minrk | @pre-commit-ci | @welcome

2.7.3#

(Full Changelog)

New features added#
Contributors to this release#

(GitHub contributors page for this release)

@davidbrochart

2.7.1#

(Full Changelog)

Bugs fixed#
Maintenance and upkeep improvements#
  • send2trash now supports deleting from different filesystem type (#1290) #1291 (@wqj97)

Documentation improvements#
Contributors to this release#

(GitHub contributors page for this release)

@allstrive | @bhperry | @blink1073 | @emmanuel-ferdman | @Hind-M | @kevin-bates | @krassowski | @mathbunnyru | @matthewwiese | @minrk | @pre-commit-ci | @welcome | @wqj97 | @Zsailer

2.7.0#

(Full Changelog)

Bugs fixed#
Maintenance and upkeep improvements#
Documentation improvements#
Contributors to this release#

(GitHub contributors page for this release)

@allstrive | @blink1073 | @fcollonval | @kevin-bates | @minrk | @pre-commit-ci | @welcome

2.6.0#

(Full Changelog)

New features added#
  • Emit events from the kernels service and gateway client #1252 (@rajmusuku)

Enhancements made#
  • Allows immutable cache for static files in a directory #1268 (@brichet)

  • Merge the gateway handlers into the standard handlers. #1261 (@ojarjur)

  • Gateway manager retry kernel updates #1256 (@ojarjur)

  • Use debug-level messages for generating anonymous users #1254 (@hbcarlos)

  • Define a CURRENT_JUPYTER_HANDLER context var #1251 (@Zsailer)

Bugs fixed#
Maintenance and upkeep improvements#
Documentation improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @brichet | @codecov | @davidbrochart | @dependabot | @echarles | @frenzymadness | @hbcarlos | @kevin-bates | @lresende | @minrk | @ojarjur | @pre-commit-ci | @rajmusuku | @SauravMaheshkar | @welcome | @yuvipanda | @Zsailer

2.5.0#

(Full Changelog)

Enhancements made#
Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @broden-wanner | @codecov | @welcome | @Zsailer

2.4.0#

(Full Changelog)

Enhancements made#
Bugs fixed#
  • Fix port selection #1229 (@blink1073)

  • Fix priority of deprecated NotebookApp.notebook_dir behind ServerApp.root_dir #1223 (@minrk)

  • Ensure content-type properly reflects gateway kernelspec resources #1219 (@kevin-bates)

Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @Carreau | @codecov | @codecov-commenter | @davidbrochart | @dcsaba89 | @echarles | @kenyaachon | @kevin-bates | @minrk | @vidartf | @welcome | @Zsailer

2.3.0#

(Full Changelog)

Enhancements made#
Bugs fixed#
  • Redact tokens, etc. in url parameters from request logs #1212 (@minrk)

  • Fix get_loader returning None when load_jupyter_server_extension is not found #1193 (@cmd-ntrf)

Maintenance and upkeep improvements#
Documentation improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @cmd-ntrf | @codecov | @dcsaba89 | @meeseeksdev | @minrk | @pre-commit-ci | @schnell18 | @welcome

2.2.1#

(Full Changelog)

Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @codecov | @jonnygrout | @minrk | @welcome

2.2.0#

(Full Changelog)

Enhancements made#
Bugs fixed#
  • Don’t assume that resources entries are relative #1182 (@ojarjur)

Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @Carreau | @codecov | @kevin-bates | @minrk | @ojarjur | @welcome | @yuvipanda

2.1.0#

(Full Changelog)

Bugs fixed#
Maintenance and upkeep improvements#
Documentation improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @codecov | @vidartf

2.0.7#

(Full Changelog)

Enhancements made#
Bugs fixed#
  • Reapply preferred_dir fix, now with better backwards compatibility #1162 (@vidartf)

Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @Carreau | @codecov | @consideRatio | @meeseeksdev | @pre-commit-ci | @vidartf | @welcome | @yuvipanda

2.0.6#

(Full Changelog)

Bugs fixed#
Maintenance and upkeep improvements#
Documentation improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @codecov | @mahendrapaipuri | @welcome

2.0.5#

(Full Changelog)

Bugs fixed#
Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@Carreau | @codecov | @krassowski

2.0.4#

(Full Changelog)

Bugs fixed#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073

2.0.3#

(Full Changelog)

Bugs fixed#
  • Restore default writing of browser open redirect file, add opt-in to skip #1144 (@bollwyvl)

Contributors to this release#

(GitHub contributors page for this release)

@bollwyvl

2.0.2#

(Full Changelog)

Bugs fixed#
  • Raise errors on individual problematic extensions when listing extension #1139 (@Zsailer)

  • Find an available port before starting event loop #1136 (@blink1073)

  • only write browser files if we’re launching the browser #1133 (@hhuuggoo)

  • Logging message used to list sessions fails with template error #1132 (@vindex10)

  • Include base_url at start of kernelspec resources path #1124 (@bloomsa)

Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @bloomsa | @codecov | @hhuuggoo | @kevin-bates | @vidartf | @vindex10 | @welcome | @Zsailer

2.0.1#

(Full Changelog)

Enhancements made#
  • [Gateway] Remove redundant list kernels request during session poll #1112 (@kevin-bates)

Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @codecov | @dependabot | @kevin-bates | @ofek | @ophie200 | @welcome

2.0.0#

(Full Changelog)

Enhancements made#
Bugs fixed#
Maintenance and upkeep improvements#
Documentation improvements#
Deprecated features#
Contributors to this release#

(GitHub contributors page for this release)

@3coins | @a3626a | @akshaychitneni | @blink1073 | @bloomsa | @Carreau | @CiprianAnton | @codecov | @codecov-commenter | @danielyahn | @davidbrochart | @dependabot | @divyansshhh | @dlqqq | @echarles | @ellisonbg | @epignot | @fcollonval | @hbcarlos | @jiajunjie | @kevin-bates | @kiersten-stokes | @krassowski | @meeseeksdev | @minrk | @ofek | @oliver-sanders | @pre-commit-ci | @razrotenberg | @rickwierenga | @thetorpedodog | @vidartf | @welcome | @wjsi | @yacchin1205 | @Zsailer

2.0.0rc8#

(Full Changelog)

Enhancements made#
Bugs fixed#
Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @codecov | @kevin-bates | @Zsailer

2.0.0rc7#

(Full Changelog)

Bugs fixed#
  • Use handle_outgoing_message for ZMQ replies #1089 (@Zsailer)

  • Call ports_changed on the multi-kernel-manager instead of the kernel manager #1088 (@Zsailer)

  • Add more websocket connection tests and fix bugs #1085 (@blink1073)

Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @codecov | @Zsailer

2.0.0rc6#

(Full Changelog)

Bugs fixed#
Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@codecov | @davidbrochart | @pre-commit-ci

2.0.0rc5#

(Full Changelog)

Enhancements made#
  • New configurable/overridable kernel ZMQ+Websocket connection API #1047 (@Zsailer)

  • Add authorization to AuthenticatedFileHandler #1021 (@jiajunjie)

Bugs fixed#
Maintenance and upkeep improvements#
Documentation improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @codecov | @codecov-commenter | @jiajunjie | @minrk | @oliver-sanders | @pre-commit-ci | @welcome | @yacchin1205 | @Zsailer

2.0.0rc4#

(Full Changelog)

Enhancements made#
Bugs fixed#
Maintenance and upkeep improvements#
Documentation improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @Carreau | @codecov-commenter | @dependabot | @divyansshhh | @fcollonval | @pre-commit-ci

2.0.0rc3#

(Full Changelog)

Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @codecov-commenter

2.0.0rc2#

(Full Changelog)

Bugs fixed#
Maintenance and upkeep improvements#
Documentation improvements#
Contributors to this release#

(GitHub contributors page for this release)

@a3626a | @blink1073 | @codecov-commenter | @kevin-bates | @pre-commit-ci | @welcome

2.0.0rc1#

(Full Changelog)

Enhancements made#
Bugs fixed#
Maintenance and upkeep improvements#
Documentation improvements#
  • Update server extension disable instructions #998 (@3coins)

Deprecated features#
Contributors to this release#

(GitHub contributors page for this release)

@3coins | @blink1073 | @codecov-commenter | @divyansshhh | @kevin-bates | @meeseeksdev | @pre-commit-ci

2.0.0rc0#

(Full Changelog)

New features added#
Enhancements made#
  • Accept and manage cookies when requesting gateways #969 (@wjsi)

  • Emit events from the Contents Service #954 (@Zsailer)

  • Retry certain errors between server and gateway #944 (@kevin-bates)

  • Allow new file types #895 (@davidbrochart)

  • Adds anonymous users #863 (@hbcarlos)

  • switch to jupyter_events #862 (@Zsailer)

  • Make it easier for extensions to customize the ServerApp #879 (@minrk)

  • consolidate auth config on IdentityProvider #825 (@minrk)

  • Show import error when failing to load an extension #878 (@minrk)

  • Add the root_dir value to the logging message in case of non compliant preferred_dir #804 (@echarles)

  • Hydrate a Kernel Manager when calling GatewayKernelManager.start_kernel with a kernel_id #788 (@Zsailer)

  • Remove terminals in favor of jupyter_server_terminals extension #651 (@Zsailer)

Bugs fixed#
Maintenance and upkeep improvements#
Documentation improvements#
Other merged PRs#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @codecov-commenter | @echarles | @epignot | @krassowski | @pre-commit-ci | @razrotenberg | @welcome | @wjsi | @Zsailer

2.0.0b1#

(Full Changelog)

Enhancements made#
Bugs fixed#
Maintenance and upkeep improvements#
Documentation improvements#
Contributors to this release#

(GitHub contributors page for this release)

@akshaychitneni | @blink1073 | @codecov-commenter | @danielyahn | @davidbrochart | @dlqqq | @hbcarlos | @kevin-bates | @kiersten-stokes | @meeseeksdev | @minrk | @pre-commit-ci | @thetorpedodog | @vidartf | @welcome | @Zsailer

2.0.0b0#

(Full Changelog)

Enhancements made#
  • Make it easier for extensions to customize the ServerApp #879 (@minrk)

  • consolidate auth config on IdentityProvider #825 (@minrk)

Bugs fixed#
Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @CiprianAnton | @codecov-commenter | @dlqqq | @minrk | @pre-commit-ci | @rickwierenga | @thetorpedodog | @welcome | @Zsailer

2.0.0a2#

(Full Changelog)

Enhancements made#
  • Show import error when failing to load an extension #878 (@minrk)

Bugs fixed#
Maintenance and upkeep improvements#
Documentation improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @Carreau | @CiprianAnton | @codecov-commenter | @davidbrochart | @echarles | @kevin-bates | @martinRenou | @minrk | @pre-commit-ci

2.0.0a1#

(Full Changelog)

2.0.0a0#

(Full Changelog)

New features added#
Enhancements made#
  • Add the root_dir value to the logging message in case of non compliant preferred_dir #804 (@echarles)

  • Hydrate a Kernel Manager when calling GatewayKernelManager.start_kernel with a kernel_id #788 (@Zsailer)

  • Remove terminals in favor of jupyter_server_terminals extension #651 (@Zsailer)

Bugs fixed#
  • Defer preferred_dir validation until root_dir is set #826 (@kevin-bates)

  • missing required arguments in utils.fetch #798 (@minrk)

Maintenance and upkeep improvements#
Documentation improvements#
Contributors to this release#

(GitHub contributors page for this release)

@andreyvelich | @blink1073 | @bollwyvl | @codecov-commenter | @davidbrochart | @echarles | @hbcarlos | @kevin-bates | @meeseeksdev | @mgorny | @minrk | @pre-commit-ci | @SylvainCorlay | @welcome | @Wh1isper | @willingc | @Zsailer

1.17.0#

(Full Changelog)

Enhancements made#
  • Add the root_dir value to the logging message in case of non compliant preferred_dir #804 (@echarles)

Bugs fixed#
  • missing required arguments in utils.fetch #798 (@minrk)

Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @codecov-commenter | @davidbrochart | @echarles | @kevin-bates | @meeseeksdev | @meeseeksmachine | @Wh1isper | @Zsailer

1.16.0#

(Full Changelog)

New features added#
Enhancements made#
Bugs fixed#
Maintenance and upkeep improvements#
Other merged PRs#
  • Handle importstring pre/post save hooks #754 (@dleen)

Contributors to this release#

(GitHub contributors page for this release)

@andreyvelich | @blink1073 | @codecov-commenter | @divyansshhh | @dleen | @fcollonval | @jhamet93 | @meeseeksdev | @minrk | @rccern | @welcome | @Zsailer

1.15.6#

(Full Changelog)

Bugs fixed#
  • Missing warning when no authorizer is found in ZMQ handlers #744 (@Zsailer)

Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @codecov-commenter | @Zsailer

1.15.5#

(Full Changelog)

Bugs fixed#
  • Relax type checking on ExtensionApp.serverapp #739 (@minrk)

  • raise no-authorization warning once and allow disabled authorization #738 (@Zsailer)

Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @codecov-commenter | @minrk | @Zsailer

1.15.3#

(Full Changelog)

Bugs fixed#
  • Fix server-extension paths (3rd time’s the charm) #734 (@minrk)

  • Revert “Server extension paths (#730)” #732 (@blink1073)

Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @codecov-commenter | @minrk

1.15.2#

(Full Changelog)

Bugs fixed#
  • Server extension paths #730 (@minrk)

  • allow handlers to work without an authorizer in the Tornado settings #717 (@Zsailer)

Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @minrk | @Zsailer

1.15.1#

(Full Changelog)

Bugs fixed#
  • Revert “Reuse ServerApp.config_file_paths for consistency (#715)” #728 (@blink1073)

Contributors to this release#

(GitHub contributors page for this release)

@blink1073

1.15.0#

(Full Changelog)

New features added#
  • Add authorization layer to server request handlers #165 (@Zsailer)

Enhancements made#
Bugs fixed#
Maintenance and upkeep improvements#
Documentation improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @codecov-commenter | @davidbrochart | @echarles | @EricCousineau-TRI | @jhamet93 | @kevin-bates | @minrk | @vidartf | @welcome | @Wh1isper | @Zsailer

1.13.5#

(Full Changelog)

Enhancements made#
Bugs fixed#
Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @codecov-commenter | @davidbrochart | @echarles | @github-actions | @jasongrout | @krassowski | @maartenbreddels | @SylvainCorlay | @Vishwajeet0510 | @vkaidalov | @welcome | @Wh1isper | @Zsailer

1.13.4#

(Full Changelog)

Bugs fixed#
Contributors to this release#

(GitHub contributors page for this release)

@codecov-commenter | @davidbrochart | @Zsailer

1.13.3#

(Full Changelog)

Enhancements made#
  • More updates to unit tests for pending kernels work #662 (@Zsailer)

Bugs fixed#
Contributors to this release#

(GitHub contributors page for this release)

@Zsailer

1.13.2#

(Full Changelog)

Enhancements made#
  • Don’t block the event loop when exporting with nbconvert #655 (@davidbrochart)

  • Add more awaits for pending kernel in unit tests #654 (@Zsailer)

  • Print IPv6 url as hostname or enclosed in brackets #652 (@op3)

Bugs fixed#
Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@adamjstewart | @blink1073 | @ccw630 | @codecov-commenter | @davidbrochart | @echarles | @fcollonval | @kevin-bates | @op3 | @welcome | @Wh1isper | @Zsailer

1.13.1#

(Full Changelog)

Bugs fixed#
Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @codecov-commenter | @Zsailer

1.13.0#

(Full Changelog)

Enhancements made#
Bugs fixed#
Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @codecov-commenter | @echarles | @JohanMabille | @jtpio | @Zsailer

1.12.1#

(Full Changelog)

Bugs fixed#
Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @codecov-commenter | @jtpio

1.12.0#

(Full Changelog)

Enhancements made#
Bugs fixed#
  • Set xsrf cookie on base url #612 (@minrk)

  • Update jpserver_extensions trait to work with traitlets 5.x #610 (@Zsailer)

  • Fix allow_origin_pat property to properly parse regex #603 (@havok2063)

Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @codecov-commenter | @havok2063 | @minrk | @mwakaba2 | @toonn | @welcome | @Zsailer

1.11.2#

(Full Changelog)

Bugs fixed#
Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@codecov-commenter | @dependabot | @kevin-bates | @stdll00 | @welcome | @Wh1isper | @Zsailer

1.11.1#

(Full Changelog)

Bugs fixed#
  • Do not log connection error if the kernel is already shutdown #584 (@martinRenou)

  • [BUG]: allow None for min_open_files_limit trait #587 (@Zsailer)

Contributors to this release#

(GitHub contributors page for this release)

@codecov-commenter | @martinRenou | @Zsailer

1.11.0#

(Full Changelog)

Enhancements made#
Bugs fixed#
Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @Carreau | @codecov-commenter | @fcollonval | @martinRenou | @oliver-sanders | @vidartf

1.10.2#

(Full Changelog)

Bugs fixed#
Maintenance and upkeep improvements#
Documentation improvements#
  • Fix typo in allow_password_change help #559 (@manics)

Contributors to this release#

(GitHub contributors page for this release)

@afshin | @codecov-commenter | @echarles | @manics | @mariobuikhuizen | @oliver-sanders | @welcome | @Zsailer

1.10.1#

(Full Changelog)

Bugs fixed#
Contributors to this release#

(GitHub contributors page for this release)

@fcollonval

1.10.0#

(Full Changelog)

Enhancements made#
Bugs fixed#
Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @codecov-commenter | @davidbrochart | @goanpeca | @kevin-bates | @martinRenou | @oliver-sanders | @welcome | @Zsailer

1.9.0#

(Full Changelog)

Enhancements made#
  • enable a way to run a task when an io_loop is created #531 (@eastonsuo)

  • adds GatewayClient.auth_scheme configurable #529 (@telamonian)

  • [Notebook port 4835] Add UNIX socket support to notebook server #525 (@jtpio)

Bugs fixed#
Maintenance and upkeep improvements#
Documentation improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @codecov-commenter | @davidbrochart | @eastonsuo | @icankeep | @jtpio | @kevin-bates | @krassowski | @telamonian | @vidartf | @welcome | @Zsailer

1.8.0#

(Full Changelog)

Enhancements made#
  • Expose a public property to sort extensions deterministically. #522 (@Zsailer)

Bugs fixed#
  • init_httpserver at the end of initialize #517 (@minrk)

Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@codecov-commenter | @jtpio | @minrk | @mwakaba2 | @vidartf | @welcome | @Zsailer

1.7.0#

(Full Changelog)

Bugs fixed#
Enhancements made#
  • Make nbconvert root handler asynchronous #512 (@hMED22)

  • Refactor gateway kernel management to achieve a degree of consistency #483 (@kevin-bates)

Maintenance and upkeep improvements#
  • Remove Packaging Dependency #515 (@jtpio)

  • Use kernel_id for new kernel if it doesn’t exist in MappingKernelManager.start_kernel #511 (@the-higgs)

  • Include backtrace in debug output when extension fails to load #506 (@candlerb)

  • ExtensionPoint: return True on successful validate() #503 (@minrk)

  • ExtensionManager: load default config manager by default #502 (@minrk)

  • Prep for Release Helper Usage #494 (@jtpio)

  • Typo in shutdown with answer_yes #491 (@kiendang)

  • Remove some of ipython_genutils no-op. #440 (@Carreau)

  • Drop dependency on pywin32 #514 (@kevin-bates)

  • Upgrade anyio to v3 #492 (@mwakaba2)

  • Add Appropriate Token Permission for CodeQL Workflow #489 (@afshin)

Documentation improvements#
Contributors to this release#

(GitHub contributors page for this release)

@blink1073 | @candlerb | @Carreau | @codecov-commenter | @hMED22 | @jtpio | @kevin-bates | @kiendang | @minrk | @mwakaba2 | @the-higgs | @welcome | @Zsailer

1.6.4#

(Full Changelog)

Bugs fixed#
Contributors to this release#

(GitHub contributors page for this release)

@afshin

1.6.3#

(Full Changelog)

Merges#
  • Gate anyio version. 2b51ee3

  • Fix activity tracking and nudge issues when kernel ports change on restarts #482 (@kevin-bates)

Contributors to this release#

(GitHub contributors page for this release)

@kevin-bates

1.6.2#
Enhancements made#
Bugs fixed#
  • Re-enable support for answer_yes flag #479 (@jtpio)

Maintenance and upkeep improvements#
Contributors to this release#

(GitHub contributors page for this release)

@jtpio

1.6.1#

(Full Changelog)

Merged PRs#
Contributors to this release#

(GitHub contributors page for this release)

@codecov-io | @davidbrochart | @echarles | @faucct | @jtpio | @welcome

1.6.0#

(Full Changelog)

New features added#
  • Add env variable support for port options #461 (@afshin)

Enhancements made#
Maintenance and upkeep improvements#
Documentation improvements#
  • Add Changelog to Sphinx Docs #465 (@afshin)

  • Update description for kernel restarted in the API docs #463 (@jtpio)

  • Delete the extra “or” that prevents easy cut-and-paste of URLs. #460 (@jasongrout)

  • Add descriptive log for port unavailable and port-retries=0 #459 (@afshin)

Other merged PRs#
Contributors to this release#

(GitHub contributors page for this release)

@afshin | @codecov-io | @echarles | @jasongrout | @jtpio | @kevin-bates | @vidartf

1.5.1#

(Full Changelog)

Merged pull requests:

  • Ensure jupyter config dir exists #454 (@afshin)

  • Allow pre_save_hook to cancel save with HTTPError #456 (@minrk)

Contributors to this release:

(GitHub contributors page for this release)

@afshin | @minrk

1.5.0#

(Full Changelog)

Merged pull requests:

Contributors to this release:

(GitHub contributors page for this release)

@afshin | @blink1073 | @codecov-io | @jtpio | @kevin-bates | @kiendang | @minrk | @sngyo | @Zsailer

1.4.1 (2021-02-22)#

Full Changelog

Merged pull requests:

Contributors to this release:

(GitHub contributors page for this release)

@jamesmishra | @Zsailer

1.4.0 (2021-02-18)#

Full Changelog

Merged pull requests:

1.3.0 (2021-02-04)#

Full Changelog

Merged pull requests (includes those from broken 1.2.3 release):

  • Special case ExtensionApp that starts the ServerApp #401 (afshin)

  • only use deprecated notebook_dir config if root_dir is not set #400 (minrk)

  • Use async kernel manager by default #399 (kevin-bates)

  • Revert Session.username default value change #398 (mwakaba2)

  • Re-enable default_url in ExtensionApp #393 (afshin)

  • Enable notebook ContentsManager in jupyter_server #392 (afshin)

  • Use jupyter_server_config.json as config file in the update password api #390 (echarles)

  • Increase culling test idle timeout #388 (kevin-bates)

  • update changelog for 1.2.2 #387 (Zsailer)

1.2.3 (2021-01-29)#

This was a broken release and was yanked from PyPI.

Full Changelog

Merged pull requests:

  • Re-enable default_url in ExtensionApp #393 (afshin)

  • Enable notebook ContentsManager in jupyter_server #392 (afshin)

  • Use jupyter_server_config.json as config file in the update password api #390 (echarles)

  • Increase culling test idle timeout #388 (kevin-bates)

  • update changelog for 1.2.2 #387 (Zsailer)

1.2.2 (2021-01-14)#

Merged pull requests:

1.2.1 (2021-01-08)#

Full Changelog

Merged pull requests:

  • Enable extensions to set debug and open-browser flags #379 (afshin)

  • Add reconnection to Gateway #378 (oyvsyo)

1.2.0 (2021-01-07)#

Full Changelog

Merged pull requests:

  • Flip default value for open_browser in extensions #377 (ajbozarth)

  • Improve Handling of the soft limit on open file handles #376 (afshin)

  • Handle open_browser trait in ServerApp and ExtensionApp differently #375 (afshin)

  • Add setting to disable redirect file browser launch #374 (afshin)

  • Make trust handle use ensure_async #373 (vidartf)

1.1.4 (2021-01-04)#

Full Changelog

Merged pull requests:

1.1.3 (2020-12-23)#

Full Changelog

Merged pull requests:

  • Culling: ensure last_activity attr exists before use #365 (afshin)

1.1.2 (2020-12-21)#

Full Changelog

Merged pull requests:

  • Nudge kernel with info request until we receive IOPub messages #361 (SylvainCorlay)

1.1.1 (2020-12-16)#

Full Changelog

Merged pull requests:

  • Fix: await possible async dir_exists method #363 (mwakaba2)

1.1.0 (2020-12-11)#

Full Changelog

Merged pull requests:

1.0.6 (2020-11-18)#

1.0.6 is a security release, fixing one vulnerability:

Changed#
  • Fix open redirect vulnerability GHSA-grfj-wjv9-4f9v (CVE-2020-26232)

1.0 (2020-9-18)#
Added#
  • Added a basic, styled login.html template. (220, 295)

  • Added new extension manager API for handling server extensions. (248, 265, 275, 303)

  • The favicon and Jupyter logo are now available under jupyter_server’s static namespace. (284)

Changed#
  • load_jupyter_server_extension should be renamed to _load_jupyter_server_extension in server extensions. Server now throws a warning when the old name is used. (213)

  • Docs for server extensions now recommend using authenticated decorator for handlers. (219)

  • _load_jupyter_server_paths should be renamed to _load_jupyter_server_points in server extensions. (277)

  • static_url_prefix in ExtensionApps is now a configurable trait. (289)

  • extension_name trait was removed in favor of name. (232)

  • Dropped support for Python 3.5. (296)

  • Made the config_dir_name trait configurable in ConfigManager. (297)

Removed#
  • Removed ipykernel as a dependency of jupyter_server. (255)

Fixed#
  • Prevent a re-definition of prometheus metrics if notebook package already imports them. (#210)

  • Fixed terminals REST API unit tests that weren’t shutting down properly. (221)

  • Fixed jupyter_server on Windows for Python < 3.7. Added patch to handle subprocess cleanup. (240)

  • base_url was being duplicated when getting a url path from the ServerApp. (280)

  • Extension URLs are now properly prefixed with base_url. Previously, all static paths were not. (285)

  • Changed ExtensionApp mixin to inherit from HasTraits. This broke in traitlets 5.0 (294)

  • Replaces urlparse with url_path_join to prevent URL squashing issues. (304)

[0.3] - 2020-4-22#
Added#
  • (#191) Async kernel management is now possible using the AsyncKernelManager from jupyter_client

  • (#201) Parameters can now be passed to new terminals created by the terminals REST API.

Changed#
  • (#196) Documentation was rewritten + refactored to use pydata_sphinx_theme.

  • (#174) ExtensionHandler was changed to a Mixin class, i.e. ExtensionHandlerMixin

Removed#
  • (#194) The bundlerextension entry point was removed.

[0.2.1] - 2020-1-10#
Added#
  • pytest-plugin for Jupyter Server.

    • Allows one to write async/await syntax in test functions.

    • Some particularly useful fixtures include:

      • serverapp: a default ServerApp instance that handles setup+teardown.

      • configurable_serverapp: a function that returns a ServerApp instance.

      • fetch: an awaitable function that tests use to make requests to the server API

      • create_notebook: a function that writes a notebook to a given temporary file path.

[0.2.0] - 2019-12-19#
Added#
  • extension submodule (#48)

    • ExtensionApp - configurable JupyterApp-subclass for server extensions

      • Most useful for Jupyter frontends, like Notebook, JupyterLab, nteract, voila etc.

      • Launch with entrypoints

      • Configure from file or CLI

      • Add custom templates, static assets, handlers, etc.

      • Static assets are served behind a /static/<extension_name> endpoint.

      • Run server extensions in “standalone mode” (#70 and #76)

    • ExtensionHandler - tornado handlers for extensions.

      • Finds static assets at /static/<extension_name>

Changed#
  • jupyter serverextension <command> entrypoint has been changed to jupyter server extension <command>.

  • toggle_jupyter_server and validate_jupyter_server functions no longer take a Logger object as an argument.

  • Changed testing framework from nosetests to pytest (#152)

    • Depend on pytest-tornasync extension for handling tornado/asyncio eventloop

    • Depend on pytest-console-scripts for testing CLI entrypoints

  • Added Github actions as a testing framework along side Travis and Azure (#146)

Removed#
  • Removed the option to update root_dir trait in FileContentsManager and MappingKernelManager in ServerApp (#135)

Fixed#
Security#
  • Added a secure_write function for cookie/token saves (#77)