{ "cells": [ { "cell_type": "markdown", "id": "59d19f73", "metadata": {}, "source": [ "# Python API Example - Freight Route Data Import and Storage in Dataframe\n", "\n", "This guide is designed to provide an example of how to access the Spark API:\n", "- The path to your client credentials is the only input needed to run this script (just before Section 2)\n", "- This script has been designed to display the raw outputs of requests from the API, and how to format those outputs to enable easy reading and analysis\n", "- This script can be copy and pasted by customers for quick use of the API\n", "- Once comfortable with the process, you can change the variables that are called to produce your own custom analysis products. (Section 2 onwards in this guide).\n", "\n", "__N.B. This guide is just for Freight route data. If you're looking for other API data products (such as Price releases or Netbacks), please refer to their according code example files.__ " ] }, { "cell_type": "markdown", "id": "d1d7ca7a", "metadata": {}, "source": [ "### Have any questions?\n", "\n", "If you have any questions regarding our API, or need help accessing specific datasets, please contact us at:\n", "\n", "__data@sparkcommodities.com__\n", "\n", "or refer to our API website for more information about this endpoint: https://www.sparkcommodities.com/api/lng-freight/routes.html" ] }, { "cell_type": "markdown", "id": "9e00ae34", "metadata": {}, "source": [ "## 1. Importing Data\n", "\n", "Here we define the functions that allow us to retrieve the valid credentials to access the Spark API.\n", "\n", "This section can remain unchanged for most Spark API users." 
] }, { "cell_type": "code", "execution_count": null, "id": "c32eb493", "metadata": {}, "outputs": [], "source": [ "import json\n", "import os\n", "import sys\n", "import pandas as pd\n", "import numpy as np\n", "from base64 import b64encode\n", "from pprint import pprint\n", "from urllib.parse import urljoin\n", "import datetime\n", "from io import StringIO\n", "\n", "\n", "try:\n", " from urllib import request, parse\n", " from urllib.error import HTTPError\n", "except ImportError:\n", " raise RuntimeError(\"Python 3 required\")\n", "\n", "\n", "API_BASE_URL = \"https://api.sparkcommodities.com\"\n", "\n", "\n", "def retrieve_credentials(file_path=None):\n", " \"\"\"\n", " Find credentials either by reading the client_credentials file or reading\n", " environment variables\n", " \"\"\"\n", " if file_path is None:\n", " client_id = os.getenv(\"SPARK_CLIENT_ID\")\n", " client_secret = os.getenv(\"SPARK_CLIENT_SECRET\")\n", " if not client_id or not client_secret:\n", " raise RuntimeError(\n", " \"SPARK_CLIENT_ID and SPARK_CLIENT_SECRET environment vars required\"\n", " )\n", " else:\n", " # Parse the file\n", " if not os.path.isfile(file_path):\n", " raise RuntimeError(\"The file {} doesn't exist\".format(file_path))\n", "\n", " with open(file_path) as fp:\n", " lines = [l.replace(\"\\n\", \"\") for l in fp.readlines()]\n", "\n", " if lines[0] in (\"clientId,clientSecret\", \"client_id,client_secret\"):\n", " client_id, client_secret = lines[1].split(\",\")\n", " else:\n", " print(\"First line read: '{}'\".format(lines[0]))\n", " raise RuntimeError(\n", " \"The specified file {} doesn't look like to be a Spark API client \"\n", " \"credentials file\".format(file_path)\n", " )\n", "\n", " print(\">>>> Found credentials!\")\n", " print(\n", " \">>>> Client_id={}, client_secret={}****\".format(client_id, client_secret[:5])\n", " )\n", "\n", " return client_id, client_secret\n", "\n", "\n", "def do_api_post_query(uri, body, headers):\n", " url = urljoin(API_BASE_URL, 
uri)\n", "\n", " data = json.dumps(body).encode(\"utf-8\")\n", "\n", " # HTTP POST request\n", " req = request.Request(url, data=data, headers=headers)\n", " try:\n", " response = request.urlopen(req)\n", " except HTTPError as e:\n", " print(\"HTTP Error: \", e.code)\n", " print(e.read())\n", " sys.exit(1)\n", "\n", " resp_content = response.read()\n", "\n", " # The server must return HTTP 201. Raise an error if this is not the case\n", " assert response.status == 201, resp_content\n", "\n", " # The server returned a JSON response\n", " content = json.loads(resp_content)\n", "\n", " return content\n", "\n", "\n", "def do_api_get_query(uri, access_token, format='json'):\n", " \"\"\"\n", " After receiving an Access Token, we can request information from the API.\n", " \"\"\"\n", " url = urljoin(API_BASE_URL, uri)\n", "\n", " if format == 'json':\n", " headers = {\n", " \"Authorization\": \"Bearer {}\".format(access_token),\n", " \"Accept\": \"application/json\",\n", " }\n", " elif format == 'csv':\n", " headers = {\n", " \"Authorization\": \"Bearer {}\".format(access_token),\n", " \"Accept\": \"text/csv\"\n", " }\n", " else:\n", " raise ValueError('The format parameter only takes `csv` or `json` as inputs')\n", "\n", " # HTTP GET request\n", " req = request.Request(url, headers=headers)\n", " try:\n", " response = request.urlopen(req)\n", " except HTTPError as e:\n", " print(\"HTTP Error: \", e.code)\n", " print(e.read())\n", " sys.exit(1)\n", "\n", " resp_content = response.read()\n", " #status = response.status\n", "\n", " # The server must return HTTP 200. Raise an error if this is not the case\n", " assert response.status == 200, resp_content\n", "\n", " # Storing response based on requested format\n", " if format == 'json':\n", " content = json.loads(resp_content)\n", " elif format == 'csv':\n", " content = resp_content\n", "\n", " return content\n", "\n", "\n", "def get_access_token(client_id, client_secret):\n", " \"\"\"\n", " Get a new access_token. 
Access tokens are what applications use to make\n", "    API requests. Access tokens must be kept confidential in storage.\n", "\n", "    # Procedure:\n", "\n", "    Do a POST query with `grantType` and `scopes` in the body. A basic authorization\n", "    HTTP header is required. The \"Basic\" HTTP authentication scheme is defined in\n", "    RFC 7617, which transmits credentials as `clientId:clientSecret` pairs, encoded\n", "    using base64.\n", "    \"\"\"\n", "\n", "    # Note: for the sake of this example, we choose to use the Python urllib from the\n", "    # standard lib. One should consider using https://requests.readthedocs.io/\n", "\n", "    payload = \"{}:{}\".format(client_id, client_secret).encode()\n", "    headers = {\n", "        \"Authorization\": \"Basic {}\".format(b64encode(payload).decode()),\n", "        \"Accept\": \"application/json\",\n", "        \"Content-Type\": \"application/json\",\n", "    }\n", "    body = {\n", "        \"grantType\": \"clientCredentials\",\n", "    }\n", "\n", "    content = do_api_post_query(uri=\"/oauth/token/\", body=body, headers=headers)\n", "\n", "    print(\n", "        \">>>> Successfully fetched an access token {}****, valid {} seconds.\".format(\n", "            content[\"accessToken\"][:5], content[\"expiresIn\"]\n", "        )\n", "    )\n", "\n", "    return content[\"accessToken\"]\n" ] }, { "cell_type": "markdown", "id": "bec1f77b", "metadata": {}, "source": [ "## Defining Fetch Request\n", "\n", "Here is where we define what type of data we want to fetch from the API.\n", "\n", "In this fetch request, we use the URI:\n", "\n", "__uri=\"/v1.0/routes/\"__\n", "\n", "This queries shipping route data specifically. Other data products (such as price releases) require different URIs in the fetch request (refer to the other Python API examples)." ] }, { "cell_type": "code", "execution_count": null, "id": "fb27e003", "metadata": {}, "outputs": [], "source": [ "## Define function for listing routes from API\n", "def list_routes(access_token):\n", "    \"\"\"\n", "    Fetch available routes. 
Return route IDs and Spark price release dates.\n", "\n", "    The URI used returns a list of all available Spark routes. With these routes, we can find the price breakdown of a specific route.\n", "\n", "    # Procedure:\n", "\n", "    Do a GET query to /v1.0/routes/ with a Bearer token authorization HTTP header.\n", "    \"\"\"\n", "    content = do_api_get_query(uri=\"/v1.0/routes/\", access_token=access_token)\n", "\n", "    print(\">>>> All the routes you can fetch\")\n", "    tickers = []\n", "    for contract in content[\"data\"][\"routes\"]:\n", "        tickers.append(contract[\"uuid\"])\n", "\n", "    reldates = content[\"data\"][\"sparkReleaseDates\"]\n", "\n", "    dicto1 = content[\"data\"]\n", "\n", "    return tickers, reldates, dicto1" ] }, { "cell_type": "markdown", "id": "1e890e9e", "metadata": {}, "source": [ "## N.B. Credentials\n", "\n", "Here we call the above functions, and input the file path to our credentials.\n", "\n", "N.B. You must have downloaded your client credentials CSV file before proceeding. If you have not downloaded it already, instructions can be found here:\n", "\n", "https://www.sparkcommodities.com/api/request/authentication.html\n", "\n", "The code then prints:\n", "- the number of callable routes available\n", "- the number of Spark freight price dates that are callable" ] }, { "cell_type": "code", "execution_count": null, "id": "51b8a89c", "metadata": {}, "outputs": [], "source": [ "## Input your file location here\n", "client_id, client_secret = retrieve_credentials(file_path=\"/tmp/client_credentials.csv\")\n", "\n", "# Authenticate:\n", "access_token = get_access_token(client_id, client_secret)\n", "\n", "# Fetch all contracts:\n", "routes, reldates, dicto1 = list_routes(access_token)" ] }, { "cell_type": "markdown", "id": "2345f33a", "metadata": {}, "source": [ "# 2. Describing available routes\n", "\n", "We have now saved all available routes as a dictionary. 
We can check how this looks, and then filter the routes by several characteristics." ] }, { "cell_type": "code", "execution_count": null, "id": "9a127827", "metadata": {}, "outputs": [], "source": [ "## Raw dictionary\n", "\n", "print(dicto1)" ] }, { "cell_type": "code", "execution_count": null, "id": "e75c2b64", "metadata": {}, "outputs": [], "source": [ "## Store route characteristics as a DataFrame\n", "\n", "def check_and_store_characteristics(dict1):\n", "    \"\"\"\n", "    Store some of the route characteristics in lists, and return them as a DataFrame.\n", "    N.B. these are not all the characteristics available!\n", "    Check the output of the raw dictionary (above) to see all available characteristics.\n", "    \"\"\"\n", "\n", "    routes_info = {\n", "        \"UUID\": [],\n", "        \"Load Location\": [],\n", "        \"Discharge Location\": [],\n", "        \"Via\": [],\n", "        \"Load Region\": [],\n", "        \"Discharge Region\": [],\n", "        \"Load UUID\": [],\n", "        \"Discharge UUID\": []\n", "    }\n", "    for route in dict1[\"routes\"]:\n", "        routes_info['UUID'].append(route[\"uuid\"])\n", "        routes_info['Via'].append(route[\"via\"])\n", "\n", "        routes_info['Load UUID'].append(route[\"loadPort\"][\"uuid\"])\n", "        routes_info['Load Location'].append(route[\"loadPort\"][\"name\"])\n", "        routes_info['Load Region'].append(route[\"loadPort\"][\"region\"])\n", "\n", "        routes_info['Discharge UUID'].append(route[\"dischargePort\"][\"uuid\"])\n", "        routes_info['Discharge Location'].append(route[\"dischargePort\"][\"name\"])\n", "        routes_info['Discharge Region'].append(route[\"dischargePort\"][\"region\"])\n", "\n", "    route_df = pd.DataFrame(routes_info)\n", "\n", "    return route_df\n" ] }, { "cell_type": "markdown", "id": "b53ee7af", "metadata": {}, "source": [ "### Exploring the data\n" ] }, { "cell_type": "code", "execution_count": null, "id": "6db5808e", "metadata": {}, "outputs": [], "source": [ "## We use the stored route characteristics to create the dataframe\n", "route_df = 
check_and_store_characteristics(dicto1)\n", "\n", "route_df.head()" ] }, { "cell_type": "markdown", "id": "2854904d", "metadata": {}, "source": [ "# 3. Analysing a Specific Route\n", "\n", "\n", "Here we define the function that allows us to pull data for a specific route and release date.\n", "\n", "We then define a given route ID ('my_route') and release date ('my_release') below the function, and these values are printed out for the user to check the parameters.\n", "\n", "\n", "__NOTE:__ The 'congestion_laden' and 'congestion_ballast' options are limited to the following combinations: (0,0), (1,1), (4,4), (7,7), (10,10), (15,15), (20,20). Both parameters must have the same value." ] }, { "cell_type": "code", "execution_count": null, "id": "c7502772", "metadata": {}, "outputs": [], "source": [ "## Defining the function\n", "\n", "def fetch_route_data(access_token, ticker, release, congestion_laden=None, congestion_ballast=None):\n", "    \"\"\"\n", "    For a route, fetch then display the route details\n", "\n", "    # Procedure:\n", "\n", "    Do GET queries to https://api.sparkcommodities.com/v1.0/routes/{route_uuid}/\n", "    with a Bearer token authorization HTTP header.\n", "    \"\"\"\n", "\n", "    query_params = \"?release-date={}\".format(release)\n", "    if congestion_laden is not None:\n", "        query_params += \"&congestion-laden-days={}\".format(congestion_laden)\n", "    if congestion_ballast is not None:\n", "        query_params += \"&congestion-ballast-days={}\".format(congestion_ballast)\n", "\n", "    uri = \"/v1.0/routes/{}/{}\".format(ticker, query_params)\n", "    print(uri)\n", "\n", "    content = do_api_get_query(uri=uri, access_token=access_token)\n", "\n", "    my_dict = content[\"data\"]\n", "\n", "    print(\">>>> Get route information for {}\".format(ticker))\n", "\n", "    return my_dict" ] }, { "cell_type": "markdown", "id": "78ca4bb9", "metadata": {}, "source": [ "### N.B. 
Plan Limits\n", "\n", "__Premium__ Users can choose any release date, as they have full access to the dataset.\n", "\n", "__Trial__ Users must choose a release date within the last 2 weeks." ] }, { "cell_type": "code", "execution_count": null, "id": "8d57eaef", "metadata": {}, "outputs": [], "source": [ "## Calling that function and storing the output\n", "\n", "# Here we store the entire dataset called from the API\n", "\n", "load = 'Sabine Pass'\n", "discharge = 'Futtsu'\n", "via = 'cogh'\n", "\n", "my_route = route_df[(route_df[\"Load Location\"] == load) & \\\n", " (route_df[\"Discharge Location\"] == discharge) & \\\n", " (route_df['Via'] == via)]['UUID'].values[0]\n", "\n", "my_release = '2024-09-25'\n", "\n", "my_dict = fetch_route_data(access_token, my_route, release=my_release)" ] }, { "cell_type": "code", "execution_count": null, "id": "f70e1b63", "metadata": {}, "outputs": [], "source": [ "## Calling that dictionary to see how it is structured\n", "\n", "my_dict" ] }, { "cell_type": "code", "execution_count": null, "id": "287ed2ca", "metadata": {}, "outputs": [], "source": [ "## Define a variable storing the route start-end\n", "route_name = my_dict[\"name\"]" ] }, { "cell_type": "markdown", "id": "5b025e65", "metadata": {}, "source": [ "### Storing Data as a DataFrame\n", "\n", "We extract some relevant data for the chosen route, including the spot price and forward prices. These are stored in a Pandas Dataframe for readability and ease of use." 
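, "\n", "Before defining the full function, it can help to peek at a single entry of `my_dict[\"dataPoints\"]` to see which keys we will extract. The keys below match the structure returned by the route endpoint, as shown in the raw dictionary output above; this sketch assumes `my_dict` was fetched successfully." ] }, { "cell_type": "code", "execution_count": null, "id": "peekdp01", "metadata": {}, "outputs": [], "source": [ "## A quick sketch: inspect the first forward-curve point before building\n", "## the full DataFrame (assumes `my_dict` was fetched above).\n", "first_point = my_dict[\"dataPoints\"][0]\n", "\n", "print(first_point[\"deliveryPeriod\"][\"name\"])       # period label, e.g. spot or a forward month\n", "print(first_point[\"costsInUsd\"][\"total\"])          # total cost in USD\n", "print(first_point[\"costsInUsdPerMmbtu\"][\"total\"])  # total cost in $/MMBtu" ] }, { "cell_type": "markdown", "id": "peekdp02", "metadata": {}, "source": [ "These are exactly the fields the function below collects for every data point."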
] }, { "cell_type": "code", "execution_count": null, "id": "74ef56d3", "metadata": {}, "outputs": [], "source": [ "## Defining the function to store as dataframe\n", "def organise_dataframe(my_dict):\n", " my_route = {\n", " \"Period\": [],\n", " \"Start Date\": [],\n", " \"End Date\": [],\n", " \"Cost in USD\": [],\n", " \"Cost in USDperMMBtu\": [],\n", " \"Hire Cost in USD\": [],\n", " }\n", "\n", " for data in my_dict[\"dataPoints\"]:\n", " my_route['Start Date'].append(data[\"deliveryPeriod\"][\"startAt\"])\n", " my_route['End Date'].append(data[\"deliveryPeriod\"][\"endAt\"])\n", " my_route['Period'].append(data[\"deliveryPeriod\"][\"name\"])\n", "\n", " my_route['Cost in USD'].append(data[\"costsInUsd\"][\"total\"])\n", " my_route['Cost in USDperMMBtu'].append(data[\"costsInUsdPerMmbtu\"][\"total\"])\n", "\n", " my_route['Hire Cost in USD'].append(data[\"costsInUsd\"][\"hire\"])\n", "\n", "\n", " my_route_df = pd.DataFrame(my_route)\n", "\n", "\n", " ## Changing the data type of these columns from 'string' to numbers.\n", " ## This allows us to easily plot a forward curve, as well as perform statistical analysis on the prices.\n", " my_route_df[\"Cost in USD\"] = pd.to_numeric(my_route_df[\"Cost in USD\"])\n", " my_route_df[\"Hire Cost in USD\"] = pd.to_numeric(my_route_df[\"Hire Cost in USD\"])\n", " my_route_df[\"Cost in USDperMMBtu\"] = pd.to_numeric(my_route_df[\"Cost in USDperMMBtu\"])\n", " \n", " return my_route_df" ] }, { "cell_type": "code", "execution_count": null, "id": "6272154a", "metadata": {}, "outputs": [], "source": [ "my_route_df = organise_dataframe(my_dict)\n", "my_route_df.head()" ] }, { "cell_type": "markdown", "id": "265dda49", "metadata": {}, "source": [ "# Panama Canal Congestion\n", "\n", "The Spark API allows you to account for congestion delays for any route passing through the Panama canal. 
This is done via two optional parameters in the __'fetch_route_data'__ function: 'congestion_laden' and 'congestion_ballast'.\n", "\n", "- Set the congestion parameters to the number of delay days needed\n", "    - These should be given as integers, e.g. congestion_laden=4, congestion_ballast=4\n", "- If the congestion parameters are not specified, as in the examples above, the congestion value defaults to 0\n", "- If the congestion parameters are used for a route that does not go through the Panama Canal, a 404 error will be triggered\n", "\n", "Below is an example of using these congestion parameters." ] }, { "cell_type": "code", "execution_count": null, "id": "f451dec0", "metadata": {}, "outputs": [], "source": [ "## First, check which routes go via the Panama canal\n", "\n", "route_df[route_df[\"Via\"] == \"panama\"].head()" ] }, { "cell_type": "markdown", "id": "bff07798", "metadata": {}, "source": [ "### N.B. Congestion Days Options\n", "\n", "The 'congestion_laden' and 'congestion_ballast' options are limited to the following combinations: (0,0), (1,1), (4,4), (7,7), (10,10), (15,15), (20,20). Both parameters must have the same value.\n", "\n", "### N.B. Plan Limits\n", "\n", "__Premium__ users can choose any release date, as they have full access to the dataset.\n", "\n", "__Trial__ users must choose a release date within the last 2 weeks, i.e. 'reldates[13]' is the earliest date possible."
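] }, { "cell_type": "markdown", "id": "congcheck01", "metadata": {}, "source": [ "Because only a fixed set of matching congestion-day pairs is accepted, a small guard of our own (a hypothetical helper, not part of the Spark API) can catch invalid combinations before a request is made. This is just a sketch; the allowed values are those listed above." ] }, { "cell_type": "code", "execution_count": null, "id": "congcheck02", "metadata": {}, "outputs": [], "source": [ "## Hypothetical helper: validate congestion-day inputs before calling the API\n", "ALLOWED_CONGESTION_DAYS = {0, 1, 4, 7, 10, 15, 20}\n", "\n", "def validate_congestion(laden, ballast):\n", "    if laden != ballast:\n", "        raise ValueError(\"congestion_laden and congestion_ballast must have the same value\")\n", "    if laden not in ALLOWED_CONGESTION_DAYS:\n", "        raise ValueError(\n", "            \"congestion days must be one of {}\".format(sorted(ALLOWED_CONGESTION_DAYS))\n", "        )\n", "\n", "validate_congestion(4, 4)  # passes silently; validate_congestion(5, 5) would raise" ] }, { "cell_type": "markdown", "id": "congcheck03", "metadata": {}, "source": [ "You can call this helper on your chosen values before passing them to `fetch_route_data`."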
] }, { "cell_type": "code", "execution_count": null, "id": "5fd0df54", "metadata": {}, "outputs": [], "source": [ "# Specify which route we want to use ('cong_route'): we can find specific routes by filtering the dataframe.\n", "# as well as release date ('cong_release')\n", "# and amount of congestion days (cong_days)\n", "\n", "cong_route = route_df[(route_df['Via']=='panama') & \\\n", " (route_df['Load Location'] == 'Sabine Pass') & \\\n", " (route_df['Discharge Location'] == 'Futtsu')]['UUID'].tolist()[0]\n", "cong_release = reldates[8]\n", "cong_days_laden = 4\n", "cong_days_ballast = 4\n", "\n", "# Fetch the route data with these specifications\n", "cong_dict = fetch_route_data(\n", " access_token, cong_route, release=cong_release, congestion_laden=cong_days_laden, congestion_ballast=cong_days_ballast\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "87380280", "metadata": {}, "outputs": [], "source": [ "# Fetching data for the same route but without congestion delays ('nocong_dict'), for comparison\n", "nocong_dict = fetch_route_data(access_token, cong_route, release=cong_release)\n", "\n", "# Save the name of the route\n", "congroute_name = nocong_dict[\"name\"]" ] }, { "cell_type": "code", "execution_count": null, "id": "af51ebe7", "metadata": {}, "outputs": [], "source": [ "# Call the 'organise_dataframe' function to organise the dictionary into a readable dataframe.\n", "# Applying function to dictionaries\n", "cong_df = organise_dataframe(cong_dict)\n", "nocong_df = organise_dataframe(nocong_dict)" ] }, { "cell_type": "code", "execution_count": null, "id": "732aca68", "metadata": {}, "outputs": [], "source": [ "cong_df.head()" ] }, { "cell_type": "code", "execution_count": null, "id": "b3bd0178", "metadata": {}, "outputs": [], "source": [ "nocong_df.head()" ] }, { "cell_type": "markdown", "id": "a51a571c", "metadata": {}, "source": [ "# Analytics Gallery\n", "\n", "Want to gain market insights using our data?\n", "\n", "Take a look at 
our [Analytics Gallery](https://www.sparkcommodities.com/api/code-examples/analytics-examples.html) on the Spark API website, which includes:\n", "\n", "- __Routes Contract Month Evolution & Seasonality__ - For a month of interest, track how that month has priced in historically (e.g. Dec22 vs Dec23 vs Dec24), providing insight into how the current year's contract (e.g. Dec25) might price in over the coming months for a Route of your choice.\n", "\n", "Want to create meaningful charts using our data?\n", "\n", "View our Route Seasonality Chart [here](https://www.sparkcommodities.com/api/code-examples/analytics-examples.html). \n", "\n" ] } ], "metadata": { "kernelspec": { "display_name": "base", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.5" } }, "nbformat": 4, "nbformat_minor": 5 }