REST API guide

Introduction

The pSeven Enterprise REST API is a protocol for running and operating existing workflows and AppsHub apps.

REST API versions

pSeven Enterprise v2022.06 and newer use the updated REST API v2 described in this guide. The previous REST API version (v1) is supported for compatibility only and is no longer documented.

The API base URLs are:

  • {sign-in URL}/pseven/.rest/v2/ - workflow API
  • {sign-in URL}/appshub/.rest/v2/ - app API

The {sign-in URL} is the one you open to sign in to pSeven Enterprise with a web browser - for example, https://pseven.online.

Using the workflow or app API requires an authorization token, which must be added to the Authorization header in API requests - see API access below.

The API provides read-only access to workflows and apps: since many REST clients can work with the same workflow or app at the same time, none of them should be allowed to edit that workflow or app, to avoid conflicts. To change workflow or app inputs, a REST client has to deploy a new workflow or app run first, then edit the settings of that run.

The following sections guide you through a typical run deployment and execution scenario, providing API usage examples for Python and the command line.

  • Python examples found in this guide require Requests (import requests).
  • Command-line examples require curl and jq.

The examples assume that you have the API base URLs, the authorization token and header stored in variables:

# Replace 'https://pseven.online' below with your actual sign-in URL.
api_url = 'https://pseven.online/pseven/.rest/v2/{}'
api_url_appshub = 'https://pseven.online/appshub/.rest/v2/{}'
# Replace the example value with your actual token.
auth_token = '7a592349be99d8affc6739c1ff8fec98030ff14f'
auth_header = {'Authorization': 'Token ' + auth_token}
# Replace 'https://pseven.online' here with your actual sign-in URL.
$ export DA__P7__REST_BASEURL="https://pseven.online/pseven/.rest/v2/"
$ export DA__P7__REST_BASEURL_APPSHUB="https://pseven.online/appshub/.rest/v2/"
# Replace the value here with your actual token.
$ export DA__P7__REST_AUTHTOKEN="7a592349be99d8affc6739c1ff8fec98030ff14f"
$ export DA__P7__REST_AUTHHEADER="Authorization: Token ${DA__P7__REST_AUTHTOKEN}"

Note that the trailing slash / is required in endpoint URLs, as shown in the example requests.

API access

API access requires an authorization token, which must be added to the Authorization header in API requests. You can get the token from your user menu in pSeven Enterprise Studio, or get it through the API with a username and a password.

To get the token in pSeven Enterprise Studio:

  1. Click your user icon on the upper right to open the user menu.
  2. In the user menu, click API token.
  3. Copy the token from the dialog box that appears.

To get the token through the API, send a POST request to auth/login/ with the username and the password as parameters in the request body:

# Get an authorization token.
endpoint = 'auth/login/'
response = requests.post(
    api_url.format(endpoint),
    headers={'Content-Type': 'application/json'},
    data='{"username": "user@work.org", "password": "secret"}'
)
auth_token = response.json()['token']

Response data:

{
    "token": "7a592349be99d8affc6739c1ff8fec98030ff14f",
}
# Get an authorization token.
$ curl \
    --request POST \
    ${DA__P7__REST_BASEURL}auth/login/ \
    -H "Content-Type: application/json" \
    --data '{"username": "user@work.org", "password": "secret"}' \
    | jq .token

Output:

"7a592349be99d8affc6739c1ff8fec98030ff14f"

Run setup

To set up a workflow run, you will need to get the ID of the workflow you are going to launch, deploy its new run, and set run parameters.

The examples below explain how to get the workflow ID through the API, but you can also get the ID from pSeven Enterprise Studio:

  1. Select the workflow in the Explorer pane.
  2. Use the Copy REST API URL command from the Explorer menu.

The above command copies the workflow URL with the workflow ID to the clipboard. For example, in the URL below a14631d477134c46a10518bc36e8e2cb is the workflow ID:

https://pseven.online/pseven/.rest/v2/workflows/a14631d477134c46a10518bc36e8e2cb/
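
The workflow ID is the last path segment of that URL, so a REST client can also recover it from a copied URL directly. A minimal sketch (the copied_url value is the example URL above):

# Extract the workflow ID from a URL copied with the Copy REST API URL command.
copied_url = 'https://pseven.online/pseven/.rest/v2/workflows/a14631d477134c46a10518bc36e8e2cb/'
workflow_id = copied_url.rstrip('/').rsplit('/', 1)[-1]
# workflow_id is now 'a14631d477134c46a10518bc36e8e2cb'.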

Workflow information

To get a list of workflows, send a GET request to workflows/. The response will list all existing workflows with their names and IDs.

# Get a list of workflows.
endpoint = 'workflows/'
response = requests.get(
    api_url.format(endpoint),
    headers=auth_header
)
workflows = response.json()

Response data:

[
    {
        "id": "130d943e100e46e0b5da490ac005b70e",
        "name": "ToyWorkflow",
        "path": "/Users/1/ToyWorkflow.p7wf",
        "url": "https://pseven.online/pseven/.rest/v2/workflows/130d943e100e46e0b5da490ac005b70e/"
    },
    {
        "id": "a5017c372a76421a8ba41f91788f1b60",
        "name": "AnotherToyWorkflow",
        "path": "/Users/1/AnotherToyWorkflow.p7wf",
        "url": "https://pseven.online/pseven/.rest/v2/workflows/a5017c372a76421a8ba41f91788f1b60/"
    }
]
# Get a list of workflows.
$ curl \
    -H "${DA__P7__REST_AUTHHEADER}" \
    ${DA__P7__REST_BASEURL}workflows/ \
    | jq .

Output:

[
    {
        "id": "130d943e100e46e0b5da490ac005b70e",
        "name": "ToyWorkflow",
        "path": "/Users/1/ToyWorkflow.p7wf",
        "url": "https://pseven.online/pseven/.rest/v2/workflows/130d943e100e46e0b5da490ac005b70e/"
    },
    {
        "id": "a5017c372a76421a8ba41f91788f1b60",
        "name": "AnotherToyWorkflow",
        "path": "/Users/1/AnotherToyWorkflow.p7wf",
        "url": "https://pseven.online/pseven/.rest/v2/workflows/a5017c372a76421a8ba41f91788f1b60/"
    }
]

You can get more detailed information for a specific workflow by sending a GET request to workflows/{workflow ID}/. A response to this request contains, in particular, a list of all runs for the workflow under {workflow ID}. Another way to get a list of runs is to send a GET request to workflows/{workflow ID}/runs/.

toy_wf = workflows[0]

# Get a list of workflow runs from workflow details.
endpoint = 'workflows/{}/'.format(toy_wf['id'])
response = requests.get(
    api_url.format(endpoint),
    headers=auth_header
)
workflow_runs = response.json()['runs']

# Get a list of workflow runs.
endpoint = 'workflows/{}/runs/'.format(toy_wf['id'])
response = requests.get(
    api_url.format(endpoint),
    headers=auth_header
)
workflow_runs = response.json()

Response data:

[
    {
        "name": "#1 (ToyWorkflow)",
        "path": "/Users/1/ToyWorkflow.p7wf/@Runs/#00000001.p7run",
        "id": "719572a9568d4ef0a2076b73ac536524",
        "url": "https://pseven.online/pseven/.rest/v2/workflows/130d943e100e46e0b5da490ac005b70e/runs/719572a9568d4ef0a2076b73ac536524/",
        "state": "FINISHED",
        "workflow": "/Users/1/ToyWorkflow.p7wf",
        "workflow_url": "https://pseven.online/pseven/.rest/v2/workflows/130d943e100e46e0b5da490ac005b70e/"
    },
    {
        "name": "#2 (ToyWorkflow)",
        "path": "/Users/1/ToyWorkflow.p7wf/@Runs/#00000002.p7run",
        "id": "9ce34c088ded42a18e1af16353060340",
        "url": "https://pseven.online/pseven/.rest/v2/workflows/130d943e100e46e0b5da490ac005b70e/runs/9ce34c088ded42a18e1af16353060340/",
        "state": "CONFIGURATION",
        "workflow": "/Users/1/ToyWorkflow.p7wf",
        "workflow_url": "https://pseven.online/pseven/.rest/v2/workflows/130d943e100e46e0b5da490ac005b70e/"
    }
]
# Get workflow details and show its runs only.
$ curl \
    -H "${DA__P7__REST_AUTHHEADER}" \
    ${DA__P7__REST_BASEURL}workflows/130d943e100e46e0b5da490ac005b70e/ \
    | jq .runs

# Get a list of workflow runs.
$ curl \
    -H "${DA__P7__REST_AUTHHEADER}" \
    ${DA__P7__REST_BASEURL}workflows/130d943e100e46e0b5da490ac005b70e/runs/ \
    | jq .

Output:

[
    {
        "name": "#1 (ToyWorkflow)",
        "path": "/Users/1/ToyWorkflow.p7wf/@Runs/#00000001.p7run",
        "id": "719572a9568d4ef0a2076b73ac536524",
        "url": "https://pseven.online/pseven/.rest/v2/workflows/130d943e100e46e0b5da490ac005b70e/runs/719572a9568d4ef0a2076b73ac536524/",
        "state": "FINISHED",
        "workflow": "/Users/1/ToyWorkflow.p7wf",
        "workflow_url": "https://pseven.online/pseven/.rest/v2/workflows/130d943e100e46e0b5da490ac005b70e/"
    },
    {
        "name": "#2 (ToyWorkflow)",
        "path": "/Users/1/ToyWorkflow.p7wf/@Runs/#00000002.p7run",
        "id": "9ce34c088ded42a18e1af16353060340",
        "url": "https://pseven.online/pseven/.rest/v2/workflows/130d943e100e46e0b5da490ac005b70e/runs/9ce34c088ded42a18e1af16353060340/",
        "state": "CONFIGURATION",
        "workflow": "/Users/1/ToyWorkflow.p7wf",
        "workflow_url": "https://pseven.online/pseven/.rest/v2/workflows/130d943e100e46e0b5da490ac005b70e/"
    }
]

The second run in the example above is in CONFIGURATION state, which means it is already deployed and waiting for launch. This is the only state which allows changing run parameters, uploading files, and starting the run.

Run deployment

To deploy a new run, send a POST request to workflows/{workflow ID}/runs/. The request body may specify the optional name and path parameters.

A request without parameters deploys a new run to the @Runs subdirectory in the workflow. In this case, the run gets a default, automatically generated name.

# Deploy a new run with a default name and location.
headers = {'Content-Type': 'application/json'}
headers.update(auth_header)
endpoint = 'workflows/{}/runs/'.format(toy_wf['id'])
response = requests.post(
    api_url.format(endpoint),
    headers=headers
)
toy_run = response.json()

Response:

201 Created

Response data:

{
    "name": "#3 (ToyWorkflow)",
    "path": "/Users/1/ToyWorkflow.p7wf/@Runs/#00000003.p7run",
    "id": "ea51478af1c94fcea879d1bdd63faadc",
    "url": "https://pseven.online/pseven/.rest/v2/workflows/130d943e100e46e0b5da490ac005b70e/runs/ea51478af1c94fcea879d1bdd63faadc/",
    "state": "QUEUED_TO_INITIALIZE",
    "workflow": "/Users/1/ToyWorkflow.p7wf",
    "workflow_url": "https://pseven.online/pseven/.rest/v2/workflows/130d943e100e46e0b5da490ac005b70e/"
}
# Deploy a new run with a default name and location.
$ curl \
    -H "${DA__P7__REST_AUTHHEADER}" \
    --request POST \
    ${DA__P7__REST_BASEURL}workflows/130d943e100e46e0b5da490ac005b70e/runs/ \
    -H "Content-Type: application/json" | jq .

Output:

{
    "name": "#3 (ToyWorkflow)",
    "path": "/Users/1/ToyWorkflow.p7wf/@Runs/#00000003.p7run",
    "id": "ea51478af1c94fcea879d1bdd63faadc",
    "url": "https://pseven.online/pseven/.rest/v2/workflows/130d943e100e46e0b5da490ac005b70e/runs/ea51478af1c94fcea879d1bdd63faadc/",
    "state": "QUEUED_TO_INITIALIZE",
    "workflow": "/Users/1/ToyWorkflow.p7wf",
    "workflow_url": "https://pseven.online/pseven/.rest/v2/workflows/130d943e100e46e0b5da490ac005b70e/"
}

You may specify name and omit path to deploy a named run. In this case, the run is also deployed to @Runs but has a custom name.

To deploy a new run to a different directory, specify the absolute path to this directory as path. In this case, name is also required. A request with the name and path parameters in its body creates a new run in the name subdirectory under path.

# Deploy a new named run into the specified directory.
run_name = 'Toy named run'
deploy_to = 'Toy runs'
user_id = 1

headers = {'Content-Type': 'application/json'}
headers.update(auth_header)
endpoint = 'workflows/{}/runs/'.format(toy_wf['id'])
body = '{{"path": "/Users/{}/{}", "name": "{}"}}'.format(user_id, deploy_to, run_name)
response = requests.post(
    api_url.format(endpoint),
    headers=headers,
    data=body
)
toy_named_run = response.json()

Response data:

{
    "name": "Toy named run",
    "path": "/Users/1/Toy runs/Toy named run.p7run",
    "id": "32e0f692d0e945e5b6bbbfaa7d2ff308",
    "url": "https://pseven.online/pseven/.rest/v2/workflows/130d943e100e46e0b5da490ac005b70e/runs/32e0f692d0e945e5b6bbbfaa7d2ff308/",
    "state": "QUEUED_TO_INITIALIZE",
    "workflow": "/Users/1/ToyWorkflow.p7wf",
    "workflow_url": "https://pseven.online/pseven/.rest/v2/workflows/130d943e100e46e0b5da490ac005b70e/"
}
# Deploy a new named run into the specified directory.
$ curl \
    -H "${DA__P7__REST_AUTHHEADER}" \
    --request POST \
    ${DA__P7__REST_BASEURL}workflows/130d943e100e46e0b5da490ac005b70e/runs/ \
    -H "Content-Type: application/json" \
    --data '{"path": "/Users/1/Toy runs", "name": "Toy named run"}' | jq .

Output:

{
    "name": "Toy named run",
    "path": "/Users/1/Toy runs/Toy named run.p7run",
    "id": "32e0f692d0e945e5b6bbbfaa7d2ff308",
    "url": "https://pseven.online/pseven/.rest/v2/workflows/130d943e100e46e0b5da490ac005b70e/runs/32e0f692d0e945e5b6bbbfaa7d2ff308/",
    "state": "QUEUED_TO_INITIALIZE",
    "workflow": "/Users/1/ToyWorkflow.p7wf",
    "workflow_url": "https://pseven.online/pseven/.rest/v2/workflows/130d943e100e46e0b5da490ac005b70e/"
}

The response to the run deployment request will contain the run ID and a few other details. The run state will be QUEUED_TO_INITIALIZE, which means the run has just entered the deployment queue and is not ready for configuring yet.

Before changing the run parameters, get the run details by sending a GET request to workflows/{workflow ID}/runs/{run ID}/. Verify that the run state is CONFIGURATION, otherwise your run setup requests will be rejected.

# Check run state.
endpoint = 'workflows/{}/runs/{}/'.format(toy_wf['id'], toy_run['id'])
response = requests.get(
    api_url.format(endpoint),
    headers=auth_header
)
run_state = response.json()['state']

Run state in the response data should be:

'CONFIGURATION'
# Check run state.
$ curl \
    -H "${DA__P7__REST_AUTHHEADER}" \
    ${DA__P7__REST_BASEURL}workflows/130d943e100e46e0b5da490ac005b70e/runs/ea51478af1c94fcea879d1bdd63faadc/ \
    | jq .state

Output should be:

"CONFIGURATION"

Run parameters

Change the run parameters by sending a PATCH request to workflows/{workflow ID}/runs/{run ID}/. The request body should contain run details with new parameter values. The response body will contain updated run details.

Parameters are an array in run details (.parameters). Each parameter has a unique identifier (.parameters[{index}].id). New parameter identifiers are generated in every new run you deploy. Parameter name (.parameters[{index}].name) is usually the same as the name of the port mapped to the parameter. If this name is not unique in the workflow, it is prefixed with the block name; if the prefixed name is also not unique, it is additionally prefixed with the name of the parent Composite block, and so on. For example:

  • "MyFirstParameter" and "MySecondParameter" are names of parameters mapped to the input ports of some block or two different blocks. In this case the ports are named MyFirstParameter and MySecondParameter, respectively.
  • "OneBlock/MyParameter" and "TwoBlock/MyParameter" are names of parameters mapped to input ports of two different blocks. In this case, ports have the same name, so parameter names are prefixed with block names.
  • "Composite 1/MyBlock/MyParameter" and "Composite 2/MyBlock/MyParameter" are names of parameters mapped to input ports of two different blocks nested in two different Composite block. In this case, port and block names are the same, so parameter names include the names of Composite blocks.

It is recommended to identify parameters by id and to use name only as a display name - for example, if you develop a REST client with a GUI.
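
If your client still needs to look parameters up by name, a safe approach is to resolve names to IDs once per run and fail loudly on ambiguity. A minimal sketch, assuming run_params holds the .parameters array from run details (as obtained in the examples below):

# Resolve a parameter name to its ID; names are assumed unique within the run.
def param_id_by_name(run_params, name):
    matches = [p['id'] for p in run_params if p['name'] == name]
    if len(matches) != 1:
        raise KeyError('Parameter name not found or ambiguous: {}'.format(name))
    return matches[0]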

Parameter value is found in .parameters[{index}].value.data; other keys in .parameters[{index}].value provide access to value metainformation.

# Get run parameter IDs, names, and values.
endpoint = 'workflows/{}/runs/{}/'.format(toy_wf['id'], toy_run['id'])
response = requests.get(
    api_url.format(endpoint),
    headers=auth_header
)
run_params = response.json()['parameters']
run_params_values = {p['id']: {'name': p['name'], 'value': p['value']['data']} for p in run_params}

The run_params_values dictionary should be similar to:

{
    'fd10fa8bafb642f29ca89542504970ae':
    {
        'name': 'MyFirstParameter',
        'value': 'string_value'
    },
    'db4bc72e7a594ba9b0a7009d95013648':
    {
        'name': 'MySecondParameter',
        'value': 42
    }
    # ...
}
# Get run parameter IDs, names, and values.
$ curl \
    -H "${DA__P7__REST_AUTHHEADER}" \
    ${DA__P7__REST_BASEURL}workflows/130d943e100e46e0b5da490ac005b70e/runs/ea51478af1c94fcea879d1bdd63faadc/ \
    | jq "[.parameters[] | {id: .id, name: .name, value: .value.data}]"

Output should look like:

[
    {
        "id": "fd10fa8bafb642f29ca89542504970ae",
        "name": "MyFirstParameter",
        "value": "string_value"
    },
    {
        "id": "db4bc72e7a594ba9b0a7009d95013648",
        "name": "MySecondParameter",
        "value": 42
    }
]
# Get run details.
endpoint = 'workflows/{}/runs/{}/'.format(toy_wf['id'], toy_run['id'])
response = requests.get(
    api_url.format(endpoint),
    headers=auth_header
)
run_details = response.json()

# Change parameter values.
run_params = run_details['parameters']
run_params_values = {
    p['id']: {
        'name': p['name'],
        'value': p['value']['data']
    } for p in run_params}
# Replace the IDs below with actual parameter IDs from run_params_values.
run_params_values['fd10fa8bafb642f29ca89542504970ae']['value'] = 'new_string_value'
run_params_values['db4bc72e7a594ba9b0a7009d95013648']['value'] = 43
for p in run_details['parameters']:
    p['value']['data'] = run_params_values[p['id']]['value']

# Send changes and check new parameter values to verify the run setup.
headers = {'Content-Type': 'application/json'}
headers.update(auth_header)
endpoint = 'workflows/{}/runs/{}/'.format(toy_wf['id'], toy_run['id'])
response = requests.patch(
    api_url.format(endpoint),
    headers=headers,
    json=run_details
)
new_params = response.json()['parameters']
new_params_values = {p['id']: {'name': p['name'], 'value': p['value']['data']} for p in new_params}

The new_params_values dictionary should contain new values:

{
    'fd10fa8bafb642f29ca89542504970ae':
    {
        'name': 'MyFirstParameter',
        'value': 'new_string_value'
    },
    'db4bc72e7a594ba9b0a7009d95013648':
    {
        'name': 'MySecondParameter',
        'value': 43
    }
}
# Get run details and save them to a file.
$ curl \
    -H "${DA__P7__REST_AUTHHEADER}" \
    ${DA__P7__REST_BASEURL}workflows/130d943e100e46e0b5da490ac005b70e/runs/ea51478af1c94fcea879d1bdd63faadc/ \
    --output run_details.json

# (Change parameter values in run_details.json, save changes).

# Send changes and check new parameter values to verify that it worked.
$ curl \
    -H "${DA__P7__REST_AUTHHEADER}" \
    -H "Content-Type: application/json" \
    --request PATCH \
    ${DA__P7__REST_BASEURL}workflows/130d943e100e46e0b5da490ac005b70e/runs/ea51478af1c94fcea879d1bdd63faadc/ \
    --data "@run_details.json" \
    | jq "[.parameters[] | {id: .id, name: .name, value: .value.data}]"

Output should contain new values:

[
    {
        "id": "fd10fa8bafb642f29ca89542504970ae",
        "name": "MyFirstParameter",
        "value": "new_string_value"
    },
    {
        "id": "db4bc72e7a594ba9b0a7009d95013648",
        "name": "MySecondParameter",
        "value": 43
    }
]

Run commands

The following API endpoints provide the run commands:

  • workflows/{workflow ID}/runs/{run ID}/run/ - the launch command
  • workflows/{workflow ID}/runs/{run ID}/interrupt/ - the break command

A POST request to workflows/{workflow ID}/runs/{run ID}/run/ starts the run. The run state should be CONFIGURATION. Upon receiving this request, the run state immediately changes to QUEUED_TO_RUN, which means the run is placed in the run queue but is not launched yet.

# Check run state.
endpoint = 'workflows/{}/runs/{}/'.format(toy_wf['id'], toy_run['id'])
response = requests.get(
    api_url.format(endpoint),
    headers=auth_header
)
run_state = response.json()['state']

# Launch the run.
endpoint = 'workflows/{}/runs/{}/run/'.format(toy_wf['id'], toy_run['id'])
if run_state == 'CONFIGURATION':
    response = requests.post(
        api_url.format(endpoint),
        headers=auth_header
    )

Response data:

{
    "status": 200,
    "message": "Accepted"
}
# Check run state.
$ curl \
    -H "${DA__P7__REST_AUTHHEADER}" \
    ${DA__P7__REST_BASEURL}workflows/130d943e100e46e0b5da490ac005b70e/runs/ea51478af1c94fcea879d1bdd63faadc/ \
    | jq .state  # Should output "CONFIGURATION".

# Launch the run.
$ curl \
    -H "${DA__P7__REST_AUTHHEADER}" \
    --request POST \
    ${DA__P7__REST_BASEURL}workflows/130d943e100e46e0b5da490ac005b70e/runs/ea51478af1c94fcea879d1bdd63faadc/run/ \
    | jq .

Output:

{
    "status": 200,
    "message": "Accepted"
}

Once the run starts, its state changes to RUNNING. In this state you can exchange data with blocks in the workflow (see Messaging). Once the run completes, its state changes to FINISHED on success, or FAILED on error.
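
A client usually launches the run and then polls the run details until a terminal state is reached. A minimal polling sketch, assuming the toy_wf and toy_run variables from the previous examples (the 5-second interval is arbitrary):

# Poll the run until it reaches a terminal state.
import time

endpoint = 'workflows/{}/runs/{}/'.format(toy_wf['id'], toy_run['id'])
run_state = 'QUEUED_TO_RUN'
while run_state not in ('FINISHED', 'FAILED', 'INTERRUPTED'):
    time.sleep(5)  # Wait between state checks.
    response = requests.get(api_url.format(endpoint), headers=auth_header)
    run_state = response.json()['state']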

If you need to interrupt a run before it finishes normally, send a POST request to workflows/{workflow ID}/runs/{run ID}/interrupt/. The run state must be RUNNING; once the run stops, its state changes to INTERRUPTED.

# Interrupt a run.
endpoint = 'workflows/{}/runs/{}/interrupt/'.format(toy_wf['id'], toy_run['id'])
response = requests.post(
    api_url.format(endpoint),
    headers=auth_header
)

Response data:

{
    "status": 200,
    "message": "Accepted"
}
# Interrupt a run.
$ curl \
    -H "${DA__P7__REST_AUTHHEADER}" \
    --request POST \
    ${DA__P7__REST_BASEURL}workflows/130d943e100e46e0b5da490ac005b70e/runs/ea51478af1c94fcea879d1bdd63faadc/interrupt/ \
    | jq .

Output:

{
    "status": 200,
    "message": "Accepted"
}

Run results

After a run finishes, you can get the values of the workflow result ports. The result values are available when the run state is FINISHED.

# Check run state.
endpoint = 'workflows/{}/runs/{}/'.format(toy_wf['id'], toy_run['id'])
response = requests.get(
    api_url.format(endpoint),
    headers=auth_header
)
run_state = response.json()['state']

Run state in the response data should be:

'FINISHED'
# Check run state.
$ curl \
    -H "${DA__P7__REST_AUTHHEADER}" \
    ${DA__P7__REST_BASEURL}workflows/130d943e100e46e0b5da490ac005b70e/runs/ea51478af1c94fcea879d1bdd63faadc/ \
    | jq .state

Output should be:

"FINISHED"

Once the run reaches the FINISHED state, request the run details again. They now contain results data in the .results array, which earlier was empty. The .results array structure is similar to the .parameters array.

# Get names and values of run results from run details.
endpoint = 'workflows/{}/runs/{}/'.format(toy_wf['id'], toy_run['id'])
response = requests.get(
    api_url.format(endpoint),
    headers=auth_header
)
run_results = response.json()['results']
run_results_values = [{'name': r['name'], 'value': r['value']['data']} for r in run_results]

The run_results_values list might be, for example:

[
    {
        'name': 'MyResult',
        'value': 99
    },
    {
        'name': 'MyAdditionalResult',
        'value': 'no_comments'
    },
    {
        'name': 'MyFirstParameter',
        'value': 'new_string_value'
    },
    {
        'name': 'MySecondParameter',
        'value': 43
    }
]
# Get names and values of run results from run details.
$ curl \
    -H "${DA__P7__REST_AUTHHEADER}" \
    ${DA__P7__REST_BASEURL}workflows/130d943e100e46e0b5da490ac005b70e/runs/ea51478af1c94fcea879d1bdd63faadc/ \
    | jq "[.results[] | {name: .name, value: .value.data}]"

Output:

[
    {
        "name": "MyResult",
        "value": 99
    },
    {
        "name": "MyAdditionalResult",
        "value": "no_comments"
    },
    {
        "name": "MyFirstParameter",
        "value": "new_string_value"
    },
    {
        "name": "MySecondParameter",
        "value": 43
    }
]

Note that a port can be both a parameter and a result port, so for some workflows you may also find the input values of parameters in the .results array. Those values are logged from ports at run time, after the run has started. You can use them for sanity checks - for example, to verify that the run really accepted the parameter values you set, have your REST client save the parameter values it sends and compare those saved values with the parameter values found in the results.
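
A minimal sketch of such a check, assuming you kept the run_params_values dictionary sent during run setup and fetched run_results as shown above:

# Compare the parameter values the client sent with those logged in the results.
sent_values = {v['name']: v['value'] for v in run_params_values.values()}
logged_values = {r['name']: r['value']['data'] for r in run_results}
for name, value in sent_values.items():
    if name in logged_values and logged_values[name] != value:
        print('Parameter {} was sent as {!r} but logged as {!r}'.format(
            name, value, logged_values[name]))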

File download

You can download a file located inside the run directory (specified at deployment, see Run deployment) by sending a GET request to workflows/{workflow ID}/runs/{run ID}/download/ with the file path passed as the file query string parameter. The path is relative to the run directory.

# Get the results.json file from the run directory root.
file_path = 'results.json'

endpoint = 'workflows/{}/runs/{}/download/'.format(toy_wf['id'], toy_run['id'])
response = requests.get(
    api_url.format(endpoint),
    headers=auth_header,
    params=(('file', file_path),),
)
with open(file_path, 'wb') as f:
    f.write(response.content)
# Get the results.json file from the run directory root.
$ curl \
    -H "${DA__P7__REST_AUTHHEADER}" \
    ${DA__P7__REST_BASEURL}workflows/130d943e100e46e0b5da490ac005b70e/runs/ea51478af1c94fcea879d1bdd63faadc/download/?file=results.json \
    --output results.json

Downloading the run log

If the run state is FINISHED, a diagnostic log is available for download. The log is a text file, which you can download by sending a GET request to workflows/{workflow ID}/runs/{run ID}/download/ (see File download). The full log download URL for a finished run can be found under the logfile_url key in run details.

# Get the log download URL from run details.
endpoint = 'workflows/{}/runs/{}/'.format(toy_wf['id'], toy_run['id'])
response = requests.get(
    api_url.format(endpoint),
    headers=auth_header
)
toy_run['logfile_url'] = response.json()['logfile_url']

A run log download request URL looks like:

'https://pseven.online/pseven/.rest/v2/workflows/130d943e100e46e0b5da490ac005b70e/runs/ea51478af1c94fcea879d1bdd63faadc/download/?file=.p7.log.txt'
# Get the log download URL from run details.
$ curl \
    -H "${DA__P7__REST_AUTHHEADER}" \
    ${DA__P7__REST_BASEURL}workflows/130d943e100e46e0b5da490ac005b70e/runs/ea51478af1c94fcea879d1bdd63faadc/ \
    | jq .logfile_url

Output:

"https://pseven.online/pseven/.rest/v2/workflows/130d943e100e46e0b5da490ac005b70e/runs/ea51478af1c94fcea879d1bdd63faadc/download/?file=.p7.log.txt"

Use the obtained URL in the download request.

# Download the run log and save it as {run name}.log
response = requests.get(
    toy_run['logfile_url'],
    headers=auth_header
)
with open(toy_run['name'] + '.log', 'wb') as f:
    f.write(response.content)
# Download the run log and save it as toy_run.log
$ curl \
    -H "${DA__P7__REST_AUTHHEADER}" \
    ${DA__P7__REST_BASEURL}workflows/130d943e100e46e0b5da490ac005b70e/runs/ea51478af1c94fcea879d1bdd63faadc/download/?file=.p7.log.txt \
    --output toy_run.log

File upload

You can use the API to upload files to the run directory (specified when setting up the workflow run, see Run deployment). It is possible to upload to the directory root as well as to subfolders in the run directory.

Files can only be uploaded if the run state is CONFIGURATION.

# Check run state.
endpoint = 'workflows/{}/runs/{}/'.format(toy_wf['id'], toy_run['id'])
response = requests.get(
    api_url.format(endpoint),
    headers=auth_header
)
run_state = response.json()['state']

Run state in the response data should be:

'CONFIGURATION'
# Check run state.
$ curl \
    -H "${DA__P7__REST_AUTHHEADER}" \
    ${DA__P7__REST_BASEURL}workflows/130d943e100e46e0b5da490ac005b70e/runs/ea51478af1c94fcea879d1bdd63faadc/ \
    | jq .state

Output should be:

"CONFIGURATION"

In a run state other than CONFIGURATION, uploading files results in an error.

To upload files, send a POST request with the content type multipart/form-data to workflows/{workflow ID}/runs/{run ID}/upload/.

# Upload files and create folders in the run directory.

paths = ('settings.csv', 'info.txt')  # The files to upload.
upload_as = ('params.csv', None)  # New names for uploaded files (None keeps the original name).
upload_to = 'run_setup'  # The upload destination subfolder in the run directory.
create_dirs = ('init', 'preprocess')  # The folders to create.

uploads = []
uploads.append(('destination', (None, upload_to)))
for path, rename in zip(paths, upload_as):
    if rename:
        f = ('file', (rename, open(path, 'rb')))
    else:
        f = ('file', open(path, 'rb'))
    uploads.append(f)
for name in create_dirs:
    uploads.append(('directory', (None, name)))

# The uploads list contains:
# [
#     ('destination', (None, 'run_setup')),
#     ('file', ('params.csv', <open file 'settings.csv'>)),
#     ('file', <open file 'info.txt'>),
#     ('directory', (None, 'init')),
#     ('directory', (None, 'preprocess'))
# ]

endpoint = 'workflows/{}/runs/{}/upload/'.format(toy_wf['id'], toy_run['id'])
response = requests.post(
    api_url.format(endpoint),
    headers=auth_header,
    files=uploads
)

Response data:

{
    "status": 200,
    "message": "OK",
    "uploaded_files": [
        "/Users/1/ToyWorkflow.p7wf/@Runs/#00000003.p7run/run_setup/params.csv",
        "/Users/1/ToyWorkflow.p7wf/@Runs/#00000003.p7run/run_setup/info.txt"
    ],
    "uploaded_directories": [
        "/Users/1/ToyWorkflow.p7wf/@Runs/#00000003.p7run/run_setup/init",
        "/Users/1/ToyWorkflow.p7wf/@Runs/#00000003.p7run/run_setup/preprocess"
    ]
}
# Upload files and create folders in the run directory.
$ curl \
    -H "${DA__P7__REST_AUTHHEADER}" \
    -F "destination=run_setup" \
    -F "file=@settings.csv;filename=params.csv" \
    -F "file=@info.txt" \
    -F "directory=init" \
    -F "directory=preprocess" \
    ${DA__P7__REST_BASEURL}workflows/130d943e100e46e0b5da490ac005b70e/runs/ea51478af1c94fcea879d1bdd63faadc/upload/ \
    | jq .

Output:

{
    "status": 200,
    "message": "OK",
    "uploaded_files": [
        "/Users/1/ToyWorkflow.p7wf/@Runs/#00000003.p7run/run_setup/params.csv",
        "/Users/1/ToyWorkflow.p7wf/@Runs/#00000003.p7run/run_setup/info.txt"
    ],
    "uploaded_directories": [
        "/Users/1/ToyWorkflow.p7wf/@Runs/#00000003.p7run/run_setup/init",
        "/Users/1/ToyWorkflow.p7wf/@Runs/#00000003.p7run/run_setup/preprocess"
    ]
}

The example request above:

  • Creates the run_setup destination folder in the run directory, if this folder does not exist
  • Uploads settings.csv to the run_setup folder as params.csv; overwrites run_setup/params.csv if it exists
  • Uploads info.txt to the run_setup folder; overwrites run_setup/info.txt if it exists
  • Creates the empty init and preprocess folders in run_setup, if those folders do not exist

The response always lists the files that were uploaded (the array under the uploaded_files key) and the empty folders that were created (the array under the uploaded_directories key). One of those arrays may be empty.

The upload request content fields are explained below.

destination

The upload destination.

  • Single field: you can specify only one upload destination
  • Optional: if you omit this field, the files are uploaded to the run directory root

The destination field content is a string specifying the relative or absolute path to the upload destination in the run directory. Use a relative path to upload to a subfolder in the run directory (the destination path is relative to the run directory root). You can use an absolute path, provided that the folder under that path is in the run directory; otherwise, the upload will fail. If the folder at the specified path does not exist, it will be created.

file

A file to upload.

  • Multiple field: you can upload multiple files in a single request
  • Optional: you may omit this field, if you specify at least one directory field
  • Attribute: filename, an optional attribute you can add to upload the file under a different name

The file field content is the file data to upload.

Filename conflicts are ignored: if a file with the same name exists in the upload destination, the file you upload overwrites the existing one.

The optional filename attribute sets the name of the target file in the upload destination.

The file field is optional, so you can use an upload request without any file fields to create subfolders in the run directory.

directory

A subfolder to create in the run directory.

  • Multiple field: you can create multiple folders with a single request
  • Optional: you may omit this field, if you specify at least one file field

The directory field content is a string specifying the folder name and path relative to the upload destination, or relative to the run directory root if you omit the destination field. If such a folder does not exist yet, the request creates an empty folder with the specified path and name.
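
For instance, an upload request without any file fields only creates folders. A sketch, assuming the run is still in the CONFIGURATION state and using the toy_wf and toy_run variables from earlier (the folder names are hypothetical):

# Create empty subfolders in the run directory root; no files are uploaded.
endpoint = 'workflows/{}/runs/{}/upload/'.format(toy_wf['id'], toy_run['id'])
uploads = [
    ('directory', (None, 'input_data')),  # Hypothetical folder name.
    ('directory', (None, 'reports'))      # Hypothetical folder name.
]
response = requests.post(
    api_url.format(endpoint),
    headers=auth_header,
    files=uploads
)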

Messaging

While a run is in progress, blocks can exchange messages with the REST client using one or more communication channels. A channel works as a pair of message queues:

  • Channel's platform-to-client queue holds messages sent to this channel by blocks that use it. The client can dequeue messages from here to read them.
  • Channel's client-to-platform queue holds messages sent to this channel by the client. Any block can dequeue and read these messages.

Channels are created on the fly when you specify a channel name in an API call. Named channels are optional: for simple tasks, you can omit the channel name parameter to use the default unnamed channel.

Block API

In a Python script block, import the api module, then use api.message_push() and api.message_pop().

api.message_push(message, channel=None)

Send a message from the block.

  • Parameters:
    • message (string) - message data
    • channel (string) - name of the channel to use
  • Returns: None

Pushes a message to the platform-to-client queue. Channel name is optional, uses the default channel if none specified.

api.message_pop(channel=None, timeout=0)

Receive a message on the block.

  • Parameters:
    • channel (string) - name of the channel to read from
    • timeout (int) - receive timeout in seconds, 0 for none
  • Returns: message (string)
  • Raises: TimeoutError if timeout exceeded

Pops a message from the client-to-platform queue. Channel name is optional: if not specified, reads the default channel.

Note that api.message_pop() is a blocking call: the block's script will wait until it returns a value. By default, there is no timeout, so the block waits for a new message indefinitely once it calls api.message_pop(). Usually this behavior is unwanted - for example, if the client gets disconnected, the client-to-platform queue remains empty, so the block never finishes. However, if you set a timeout, the api.message_pop() method waits at most the specified number of seconds and raises a TimeoutError when the timeout is exceeded. Unless you handle that exception in your script, the exception stops the workflow run.
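
A sketch of a Python script block body that reports a status message to the client and then waits for a reply with a timeout guard (the channel names and the 60-second timeout are illustrative):

# Inside a Python script block: send a status message, then wait for a reply.
import api

api.message_push('iteration finished', channel='status')
try:
    reply = api.message_pop(channel='commands', timeout=60)
except TimeoutError:
    reply = 'stop'  # Example policy: treat a silent client as a stop request.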

Client API

Clients use the run's message_push/ and message_pop/ endpoints. Both of them accept POST requests with optional channel specification.

To send a message from the client, send a POST request to workflows/{workflow ID}/runs/{run ID}/message_push/. This request pushes a message to the client-to-platform queue. The request body should be JSON in the following format: {"message": "message data", "channel": "channel name"}. The "channel" key is optional - if you omit it, the message is sent to the default channel.

Possible responses to the message push (send) request are:

  • Message accepted: 202 Accepted with an empty body
  • Error: error status code with a body containing error details (see Error Handling)

To get a message on the client, send a POST request to workflows/{workflow ID}/runs/{run ID}/message_pop/. This request pops a message from the platform-to-client queue. The request body may be a JSON specifying which channel to read: {"channel": "channel name"}. If the body is empty, a message is read from the default channel.

Possible responses to the message pop (receive) request are:

  • Got a message: 200 OK with a JSON body in the {"message": "message data"} format
  • No messages in queue: 204 No Content with an empty body
  • Error: error status code with a body containing error details (see Error Handling)
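
A minimal client-side sketch of this exchange, assuming the toy_wf and toy_run variables from earlier and a run in the RUNNING state (the channel names match the block sketch above and are illustrative):

# Send a message to the blocks (client-to-platform queue).
headers = {'Content-Type': 'application/json'}
headers.update(auth_header)
endpoint = 'workflows/{}/runs/{}/message_push/'.format(toy_wf['id'], toy_run['id'])
requests.post(
    api_url.format(endpoint),
    headers=headers,
    json={'message': 'continue', 'channel': 'commands'}
)

# Try to receive a message from the blocks (platform-to-client queue).
endpoint = 'workflows/{}/runs/{}/message_pop/'.format(toy_wf['id'], toy_run['id'])
response = requests.post(
    api_url.format(endpoint),
    headers=headers,
    json={'channel': 'status'}
)
if response.status_code == 200:
    message = response.json()['message']
elif response.status_code == 204:
    message = None  # No messages in the queue yet.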

AppsHub API

You can use the pSeven Enterprise REST API to publish workflows as AppsHub apps (see Workflow publishing) and to run the published apps.

The app API has the same methods as the workflow API. The differences between those APIs are:

  • The app API base URL is {sign-in URL}/appshub/.rest/v2/.
  • The app API endpoint name prefix is apps. For example, apps/{app ID}/runs/ is the endpoint to get the list of app runs for that {app ID}.

# Get the list of published apps.
endpoint = 'apps/'
response = requests.get(
    api_url_appshub.format(endpoint),
    headers=auth_header
)
apps = response.json()

toy_app = apps[0]

# Deploy a new app run.
headers = {'Content-Type': 'application/json'}
headers.update(auth_header)
endpoint = 'apps/{}/runs/'.format(toy_app['id'])
response = requests.post(
    api_url_appshub.format(endpoint),
    headers=headers
)
toy_app_run = response.json()

# ...
# Get the list of published apps.
$ curl \
    -H "${DA__P7__REST_AUTHHEADER}" \
    ${DA__P7__REST_BASEURL_APPSHUB}apps/ \
    | jq .

# Deploy a new app run.
$ curl \
    -H "${DA__P7__REST_AUTHHEADER}" \
    --request POST \
    ${DA__P7__REST_BASEURL_APPSHUB}apps/c53461be10b84975a4a6ecb755cfbe00/runs/ \
    -H "Content-Type: application/json" | jq .

# ...

You can get an app ID through the API (similar to the workflow ID), or from AppsHub: hover the app thumbnail and click the icon on the upper right. This copies the app URL with the app ID to the clipboard. For example, in the URL below c53461be10b84975a4a6ecb755cfbe00 is the app ID:

https://pseven.online/appshub/.rest/v2/apps/c53461be10b84975a4a6ecb755cfbe00/

Workflow publishing

You can use the AppsHub API to publish a workflow as an app: send a POST request to apps/, specifying the workflow to publish and the publishing settings.

# Publish a workflow as an app to AppsHub.

# Get a list of workflows.
endpoint = 'workflows/'
response = requests.get(
    api_url.format(endpoint),
    headers=auth_header
)
workflows = response.json()

# Select a workflow and prepare to publish.
toy_wf = workflows[0]
settings = {
    # The app name, required. To update an existing app, specify its current name.
    'name': 'Toy app',
    # The workflow to publish, required.
    'workflow_path': toy_wf['path'],
    # A brief app description, optional.
    'description': 'A toy app to try publishing via REST.',
    # Version control, optional. Set None or omit the key to publish as a new version.
    # Specify an existing version number to update that version (add a new revision to it).
    'version': None,
    # Designate the published version as the default one (True) or keep the current default (False).
    # Ignored (assumed True) if you publish a new app: the initial version is always default.
    'set_version_as_default': False,
    # Whether to allow users to copy the workflow from the app.
    'allow_get_as_workflow': False
}

app_thumb = open('thumbnail.png', 'rb')  # The app thumbnail, optional.

# Publish the workflow with the above settings and thumbnail.
endpoint = 'apps/'
response = requests.post(
    api_url_appshub.format(endpoint),
    headers=auth_header,
    data=settings,
    files={'thumbnail': app_thumb}
)
toy_app = response.json()

Response:

201 Created

Response data:

{
    "name": "Toy app",
    "path": "/Apps/Toy app.p7wf",
    "id": "130d943e100e46e0b5da490ac005b70e",
    "readme": null,
    "url": "https://pseven.online/appshub/.rest/v2/apps/130d943e100e46e0b5da490ac005b70e@1/",
    "presets": null,
    "parameters": null,
    "results": null,
    "runs": [],
    "ui_url": null,
    "app_url": null,
    "workflow": {...},
    "thumbnail": null,
    "fingerprint": null,
    "version": 1,
    "revision": 0,
    "updated": "2024-02-02T10:23:19.995035Z",
    "versions": [],
    "state": "QUEUED_TO_INITIALIZE"
}
# Get the workflow path by its index in the list of workflows.
$ curl \
    -H "${DA__P7__REST_AUTHHEADER}" \
    ${DA__P7__REST_BASEURL}workflows/ \
    | jq .[0].path

# Output: "/Users/1/ToyWorkflow.p7wf"

# Publish the workflow and get the app ID, URL, and state.
$ curl \
    -X POST \
    -H "${DA__P7__REST_AUTHHEADER}" \
    -F "name=Toy app" \
    -F "workflow_path=/Users/1/ToyWorkflow.p7wf" \
    -F "description=A toy app to try publishing via REST." \
    -F "version=" \
    -F "set_version_as_default=False" \
    -F "allow_get_as_workflow=False" \
    -F "thumbnail=@./thumbnail.png" \
    ${DA__P7__REST_BASEURL_APPSHUB}apps/ \
    | jq .id,.url,.state

Output:

"130d943e100e46e0b5da490ac005b70e"
"https://pseven.online/appshub/.rest/v2/apps/130d943e100e46e0b5da490ac005b70e@1/"
"QUEUED_TO_INITIALIZE"

The response to the publish request will contain incomplete app details, providing the app URL and ID. The app state will be QUEUED_TO_INITIALIZE, which means it has just entered the publish queue and is not ready for use yet. Once the app publishing completes, the app state changes to READY, and its full details become available.

Before you run the newly published app, request its details and verify that the app state is READY, otherwise your app run deployment requests will be rejected.

# Get the app details, check the app state.
response = requests.get(
    toy_app['url'],
    headers=auth_header
)
toy_app = response.json()

if toy_app['state'] == 'READY':
    # The app has finished publishing, you can deploy a new app run.
    pass
else:
    # App publishing is still in progress, try again later.
    pass
# Get the app details, check the app state.
$ curl \
    -H "${DA__P7__REST_AUTHHEADER}" \
    https://pseven.online/appshub/.rest/v2/apps/130d943e100e46e0b5da490ac005b70e/ \
    | jq .state

Output should be:

"READY"

To publish an app with a thumbnail, the publish request content type must be multipart/form-data. If you omit the thumbnail, the content type may be application/json or application/x-www-form-urlencoded.
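
For example, a publish request without a thumbnail can be sent as JSON. A sketch that reuses the settings dictionary from the Python example above, dropping the optional version key (omitting it publishes a new version):

# Publish without a thumbnail, sending the settings as a JSON body.
headers = {'Content-Type': 'application/json'}
headers.update(auth_header)
json_settings = {k: v for k, v in settings.items() if v is not None}
endpoint = 'apps/'
response = requests.post(
    api_url_appshub.format(endpoint),
    headers=headers,
    json=json_settings
)
toy_app = response.json()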

The publish request content fields are explained below.

name

The app name, a string.

  • Single field
  • Required

App names are unique. If you have previously published an app with this name, your request will overwrite the existing app.

workflow_path

The path to the workflow to publish, a string.

  • Single field
  • Required

The full path to the workflow in the pSeven Enterprise data storage. You should get it through the API by requesting the workflow details.

description

The app description, a string.

  • Single field
  • Optional

The brief description of the app displayed in the app list on the main AppsHub page.

thumbnail

The app thumbnail displayed in the app list on the main AppsHub page.

  • Single field
  • Optional

The thumbnail field content is the image file data. If you specify this field, the publish request content type must be multipart/form-data.

allow_get_as_workflow

Workflow copy permission, Boolean.

  • Single field
  • Optional

If you omit this field or set it to True, then in addition to running the app, AppsHub users will be able to copy the published workflow from the app to their user storage, edit that copy in pSeven Enterprise Studio, and distribute it.

If you set this field to False, AppsHub users will only be able to run the app.

Error handling

The pSeven Enterprise REST API uses standard HTTP status codes to indicate success or failure of an API call. Successful requests return 2xx class codes - typically 200 OK, unless otherwise noted in examples in this guide. Codes of the 4xx or 5xx class indicate an error. In this case, the response body contains error details.
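
A client can surface those details with a small wrapper around its requests. A minimal sketch, using the api_url and auth_header variables from the introduction:

# Illustrative helper: raise an error with details for any 4xx/5xx response.
def api_get(endpoint):
    response = requests.get(api_url.format(endpoint), headers=auth_header)
    if response.status_code >= 400:
        # Error responses carry details in the body, e.g. {"detail": "..."}.
        raise RuntimeError('{} {}: {}'.format(
            response.status_code, response.reason, response.text))
    return response.json()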

# Unauthorized request (no authorization header).
endpoint = 'workflows/'
response = requests.get(
    api_url.format(endpoint),
    # headers=auth_header
)

Response:

HTTP/1.1 403 Forbidden

Response data:

{"detail":"Authentication credentials were not provided."}
# Unauthorized request (no authorization header).
$ curl \
    ${DA__P7__REST_BASEURL}workflows/ \
    -i -w "\n"

Output:

HTTP/1.1 403 Forbidden

{"detail":"Authentication credentials were not provided."}
# Attempt to get details of a workflow that does not exist.
wrong_id = '00000000000000000000000000000000'
endpoint = 'workflows/{}/'.format(wrong_id)
response = requests.get(
    api_url.format(endpoint),
    headers=auth_header
)

Response:

HTTP/1.1 404 Not Found

Response data:

{"detail":"Not found."}
# Attempt to get details of a workflow that does not exist.
$ curl \
    -H "${DA__P7__REST_AUTHHEADER}" \
    ${DA__P7__REST_BASEURL}workflows/00000000000000000000000000000000/ \
    -i -w "\n"

Output:

HTTP/1.1 404 Not Found

{"detail":"Not found."}