{ "cells": [ { "cell_type": "markdown", "id": "59d19f73", "metadata": {}, "source": [ "# US Arb Freight Breakevens vs Spot Freight Rates\n", "\n", "This script is used to compare the US Arb Freight Breakevens with Spark30S Spot Freight Rates.\n", "\n", "For a full explanation of how to import our Arb Breakevens or Spark30S data, please refer to our Python Jupyter Notebook Code Samples:\n", "\n", "https://www.sparkcommodities.com/api/code-examples/jupyter.html\n", "\n" ] }, { "cell_type": "markdown", "id": "b0a05be4", "metadata": {}, "source": [ "### Have any questions?\n", "\n", "If you have any questions regarding our API, or need help accessing specific datasets, please contact us at:\n", "\n", "__data@sparkcommodities.com__" ] }, { "cell_type": "markdown", "id": "9e00ae34", "metadata": {}, "source": [ "## 1. Importing Data\n", "\n", "Here we define the functions that allow us to retrieve the valid credentials to access the Spark API.\n", "\n", "__This section can remain unchanged for most Spark API users.__" ] }, { "cell_type": "code", "execution_count": null, "id": "1161e807", "metadata": {}, "outputs": [], "source": [ "import json\n", "import os\n", "import sys\n", "import pandas as pd\n", "import numpy as np\n", "from base64 import b64encode\n", "from pprint import pprint\n", "from urllib.parse import urljoin\n", "import datetime\n", "from io import StringIO\n", "\n", "\n", "try:\n", " from urllib import request, parse\n", " from urllib.error import HTTPError\n", "except ImportError:\n", " raise RuntimeError(\"Python 3 required\")\n", "\n", "\n", "API_BASE_URL = \"https://api.sparkcommodities.com\"\n", "\n", "\n", "def retrieve_credentials(file_path=None):\n", " \"\"\"\n", " Find credentials either by reading the client_credentials file or reading\n", " environment variables\n", " \"\"\"\n", " if file_path is None:\n", " client_id = os.getenv(\"SPARK_CLIENT_ID\")\n", " client_secret = os.getenv(\"SPARK_CLIENT_SECRET\")\n", " if not client_id or not 
client_secret:\n", " raise RuntimeError(\n", " \"SPARK_CLIENT_ID and SPARK_CLIENT_SECRET environment vars required\"\n", " )\n", " else:\n", " # Parse the file\n", " if not os.path.isfile(file_path):\n", " raise RuntimeError(\"The file {} doesn't exist\".format(file_path))\n", "\n", " with open(file_path) as fp:\n", " lines = [l.replace(\"\\n\", \"\") for l in fp.readlines()]\n", "\n", " if lines[0] in (\"clientId,clientSecret\", \"client_id,client_secret\"):\n", " client_id, client_secret = lines[1].split(\",\")\n", " else:\n", " print(\"First line read: '{}'\".format(lines[0]))\n", " raise RuntimeError(\n", " \"The specified file {} doesn't look like a Spark API client \"\n", " \"credentials file\".format(file_path)\n", " )\n", "\n", " print(\">>>> Found credentials!\")\n", "\n", " return client_id, client_secret\n", "\n", "\n", "def do_api_post_query(uri, body, headers):\n", " url = urljoin(API_BASE_URL, uri)\n", "\n", " data = json.dumps(body).encode(\"utf-8\")\n", "\n", " # HTTP POST request\n", " req = request.Request(url, data=data, headers=headers)\n", " try:\n", " response = request.urlopen(req)\n", " except HTTPError as e:\n", " print(\"HTTP Error: \", e.code)\n", " print(e.read())\n", " sys.exit(1)\n", "\n", " resp_content = response.read()\n", "\n", " # The server must return HTTP 201. 
Raise an error if this is not the case\n", " assert response.status == 201, resp_content\n", "\n", " # The server returned a JSON response\n", " content = json.loads(resp_content)\n", "\n", " return content\n", "\n", "\n", "def do_api_get_query(uri, access_token, format='json'):\n", " \"\"\"\n", " After receiving an Access Token, we can request information from the API.\n", " \"\"\"\n", " url = urljoin(API_BASE_URL, uri)\n", "\n", " if format == 'json':\n", " headers = {\n", " \"Authorization\": \"Bearer {}\".format(access_token),\n", " \"Accept\": \"application/json\",\n", " }\n", " elif format == 'csv':\n", " headers = {\n", " \"Authorization\": \"Bearer {}\".format(access_token),\n", " \"Accept\": \"text/csv\"\n", " }\n", " else:\n", " raise ValueError('The format parameter only takes `csv` or `json` as inputs')\n", "\n", " # HTTP GET request\n", " req = request.Request(url, headers=headers)\n", " try:\n", " response = request.urlopen(req)\n", " except HTTPError as e:\n", " print(\"HTTP Error: \", e.code)\n", " print(e.read())\n", " sys.exit(1)\n", "\n", " resp_content = response.read()\n", "\n", " # The server must return HTTP 200. Raise an error if this is not the case\n", " assert response.status == 200, resp_content\n", "\n", " # Storing response based on requested format\n", " if format == 'json':\n", " content = json.loads(resp_content)\n", " elif format == 'csv':\n", " content = resp_content\n", "\n", " return content\n", "\n", "\n", "def get_access_token(client_id, client_secret):\n", " \"\"\"\n", " Get a new access_token. Access tokens are what applications use to make\n", " API requests, and they must be kept confidential in storage.\n", "\n", " # Procedure:\n", "\n", " Do a POST query with `grantType` and `scopes` in the body. A basic authorization\n", " HTTP header is required. 
The \"Basic\" HTTP authentication scheme is defined in\n", " RFC 7617, which transmits credentials as `clientId:clientSecret` pairs, encoded\n", " using base64.\n", " \"\"\"\n", "\n", " # Note: for the sake of this example, we choose to use the Python urllib from the\n", " # standard lib. One should consider using https://requests.readthedocs.io/\n", "\n", " payload = \"{}:{}\".format(client_id, client_secret).encode()\n", " headers = {\n", " \"Authorization\": \"Basic {}\".format(b64encode(payload).decode()),\n", " \"Accept\": \"application/json\",\n", " \"Content-Type\": \"application/json\",\n", " }\n", " body = {\n", " \"grantType\": \"clientCredentials\",\n", " }\n", "\n", " content = do_api_post_query(uri=\"/oauth/token/\", body=body, headers=headers)\n", "\n", " print(\n", " \">>>> Successfully fetched an access token {}****, valid {} seconds.\".format(\n", " content[\"accessToken\"][:5], content[\"expiresIn\"]\n", " )\n", " )\n", "\n", " return content[\"accessToken\"]\n" ] }, { "cell_type": "code", "execution_count": null, "id": "ada4f167", "metadata": {}, "outputs": [], "source": [ "# Define the function for listing all netbacks\n", "def list_netbacks(access_token):\n", "\n", " content = do_api_get_query(\n", " uri=\"/v1.0/netbacks/reference-data/\", access_token=access_token\n", " )\n", "\n", " print(\">>>> All the routes you can fetch\")\n", " tickers = []\n", " fobPort_names = []\n", "\n", " availablevia = []\n", "\n", " for contract in content[\"data\"][\"staticData\"][\"fobPorts\"]:\n", " tickers.append(contract[\"uuid\"])\n", " fobPort_names.append(contract[\"name\"])\n", "\n", " availablevia.append(contract[\"availableViaPoints\"])\n", "\n", " reldates = content[\"data\"][\"staticData\"][\"sparkReleases\"]\n", "\n", " dicto1 = content[\"data\"]\n", "\n", " return tickers, fobPort_names, availablevia, reldates, dicto1" ] }, { "cell_type": "markdown", "id": "b46f962b", "metadata": {}, "source": [ "## N.B. 
Credentials\n", "\n", "Here we call the above functions, and input the file path to our credentials.\n", "\n", "N.B. You must have downloaded your client credentials CSV file before proceeding. If you have not done so already, instructions for downloading your credentials can be found here:\n", "\n", "https://www.sparkcommodities.com/api/request/authentication.html" ] }, { "cell_type": "code", "execution_count": null, "id": "3acdfe86", "metadata": {}, "outputs": [], "source": [ "# Input the path to your client credentials here\n", "client_id, client_secret = retrieve_credentials(file_path=\"/tmp/client_credentials.csv\")\n", "\n", "# Authenticate:\n", "access_token = get_access_token(client_id, client_secret)\n", "print(access_token)" ] }, { "cell_type": "markdown", "id": "7adcc5ed", "metadata": {}, "source": [ "### Fetching Netbacks reference data" ] }, { "cell_type": "code", "execution_count": null, "id": "aaefce45", "metadata": {}, "outputs": [], "source": [ "# Fetch all contracts:\n", "tickers, fobPort_names, availablevia, reldates, dicto1 = list_netbacks(access_token)\n", "\n", "# Define formatting data function\n", "def format_store(available_via, fob_names, tickrs):\n", " dict_store = {\n", " \"Index\": [],\n", " \"Ports\": [],\n", " \"Ticker\": [],\n", " \"Available Via\": []\n", " }\n", " \n", " c = 0\n", " for a in available_via:\n", " ## Check which routes have non-empty Netbacks data and save indices\n", " if len(a) != 0:\n", " dict_store['Index'].append(c)\n", "\n", " # Use these indices to retrieve the corresponding Netbacks info\n", " dict_store['Ports'].append(fob_names[c])\n", " dict_store['Ticker'].append(tickrs[c])\n", " dict_store['Available Via'].append(available_via[c])\n", " c += 1\n", " # Show available Netbacks ports in a DataFrame (with corresponding indices)\n", " dict_df = pd.DataFrame(dict_store)\n", " return dict_df\n", "\n", "# Run formatting data function\n", "available_df = 
format_store(availablevia,fobPort_names,tickers)" ] }, { "cell_type": "markdown", "id": "e447d6b2", "metadata": {}, "source": [ "## Fetching Netback UUID for an individual port" ] }, { "cell_type": "code", "execution_count": null, "id": "a4480909", "metadata": {}, "outputs": [], "source": [ "# Choose route ID and price release date\n", "\n", "# Here we define which port we want\n", "port = \"Sabine Pass\"\n", "ti = int(available_df[available_df[\"Ports\"] == port][\"Index\"].iloc[0])\n", "my_ticker = tickers[ti]\n", "my_via = 'cogh'\n", "\n", "print(my_ticker)" ] }, { "cell_type": "markdown", "id": "2d1bf4c0", "metadata": {}, "source": [ "## Data Import \n" ] }, { "cell_type": "code", "execution_count": null, "id": "eb563eb4", "metadata": {}, "outputs": [], "source": [ "from io import StringIO\n", "\n", "def fetch_breakevens(access_token, ticker, via=None, breakeven='freight', start=None, end=None, format='json'):\n", "\n", " query_params = breakeven + '/' + \"?fob-port={}\".format(ticker)\n", "\n", " if via is not None:\n", " query_params += \"&via-point={}\".format(via)\n", " if start is not None:\n", " query_params += \"&start={}\".format(start)\n", " if end is not None:\n", " query_params += \"&end={}\".format(end)\n", "\n", " uri = \"/v1.0/netbacks/arb-breakevens/{}\".format(query_params)\n", " print(uri)\n", " content = do_api_get_query(\n", " uri=uri,\n", " access_token=access_token, format=format,\n", " )\n", " \n", " if format == 'json':\n", " my_dict = content['data']\n", " else:\n", " my_dict = content.decode('utf-8')\n", " my_dict = pd.read_csv(StringIO(my_dict))\n", "\n", " return my_dict\n" ] }, { "cell_type": "code", "execution_count": null, "id": "30d317eb", "metadata": {}, "outputs": [], "source": [ "# Fetching data in CSV format\n", "\n", "## Calling that function and storing the output - CSV version\n", "break_df = fetch_breakevens(access_token, my_ticker, via=my_via, breakeven='freight', 
format='csv')\n", "\n", "break_df['ReleaseDate'] = pd.to_datetime(break_df['ReleaseDate'])\n", "break_df = break_df.rename(columns={'ReleaseDate': 'Release Date'})" ] }, { "cell_type": "code", "execution_count": null, "id": "7a80e89c", "metadata": {}, "outputs": [], "source": [ "length = len(break_df['Release Date'].unique())" ] }, { "cell_type": "markdown", "id": "de4957ed", "metadata": {}, "source": [ "# Fetching Spot Freight Prices" ] }, { "cell_type": "code", "execution_count": null, "id": "08ec1036", "metadata": {}, "outputs": [], "source": [ "def fetch_historical_price_releases(access_token, ticker, limit=4, offset=None, vessel=None):\n", "\n", " query_params = \"?limit={}\".format(limit)\n", " if offset is not None:\n", " query_params += \"&offset={}\".format(offset)\n", " \n", " # Valid vessel types: '174-2stroke' or '160-tfde'\n", " if vessel is not None:\n", " query_params += \"&vessel-type={}\".format(vessel)\n", " \n", " print(\"/v1.0/contracts/{}/price-releases/{}\".format(ticker, query_params))\n", " \n", " content = do_api_get_query(\n", " uri=\"/v1.0/contracts/{}/price-releases/{}\".format(ticker, query_params),\n", " access_token=access_token,\n", " )\n", "\n", " my_dict = content['data']\n", " \n", " return my_dict\n" ] }, { "cell_type": "code", "execution_count": null, "id": "64897525", "metadata": {}, "outputs": [], "source": [ "import datetime\n", "def fetch_prices(ticker, my_lim, my_vessel=None):\n", " my_dict_hist = fetch_historical_price_releases(access_token, ticker, limit=my_lim, vessel=my_vessel)\n", " \n", " release_dates = []\n", " period_start = []\n", " contract_ids = []\n", " usd_day = []\n", "\n", " day_min = []\n", " day_max = []\n", "\n", " for release in my_dict_hist:\n", " release_date = release[\"releaseDate\"]\n", " contract_ids.append(release['contractId'])\n", "\n", " release_dates.append(release_date)\n", "\n", " data_points = release[\"data\"][0][\"dataPoints\"]\n", "\n", " for data_point in data_points:\n", " period_start_at = data_point[\"deliveryPeriod\"][\"startAt\"]\n", " period_start.append(period_start_at)\n", "\n", " usd_day.append(data_point['derivedPrices']['usdPerDay']['spark'])\n", " day_min.append(data_point['derivedPrices']['usdPerDay']['sparkMin'])\n", " day_max.append(data_point['derivedPrices']['usdPerDay']['sparkMax'])\n", "\n", " ## Storing values in a Pandas DataFrame\n", "\n", " historical_df = pd.DataFrame({\n", " 'Release Date': release_dates,\n", " 'ticker': contract_ids,\n", " 'Period Start': period_start,\n", " 'USDperday': usd_day,\n", " 'USDperdayMax': day_max,\n", " 'USDperdayMin': day_min})\n", "\n", " historical_df['USDperday'] = pd.to_numeric(historical_df['USDperday'])\n", " historical_df['USDperdayMax'] = pd.to_numeric(historical_df['USDperdayMax'])\n", " historical_df['USDperdayMin'] = pd.to_numeric(historical_df['USDperdayMin'])\n", "\n", " historical_df['Release Date'] = pd.to_datetime(historical_df['Release Date'])\n", " \n", " return historical_df" ] }, { "cell_type": "code", "execution_count": null, "id": "efcb7515", "metadata": {}, "outputs": [], "source": [ "freight_ticker = 'spark30s'\n", "\n", "spark30_174 = fetch_prices(freight_ticker, length, my_vessel='174-2stroke')\n", "\n", "print(\"spark30_174 columns:\", spark30_174.columns.tolist())" ] }, { "cell_type": "markdown", "id": "3567b011", "metadata": {}, "source": [ "# Plotting" ] }, { "cell_type": "code", "execution_count": null, "id": "1fd3546a", "metadata": {}, "outputs": [], "source": [ "# Filter for front month (M+1) breakevens only\n", "front_df = break_df[break_df['LoadMonthIndex']==\"M+1\"]" ] }, { "cell_type": "code", "execution_count": null, "id": "6c736ee6", "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "import seaborn as sns\n", "sns.set_style(\"whitegrid\")\n", "\n", "fig, ax = plt.subplots(figsize=(15,7))\n", "\n", "ax.plot(spark30_174['Release Date'],spark30_174['USDperday'], color = '#48C38D', linewidth=2.5, label='Spark30S (Atlantic)')\n", 
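"\n", "# Optional sketch (not part of the original chart): shade the band between Spark's\n", "# min and max day-rate assessments, using the USDperdayMin/USDperdayMax columns\n", "# built in fetch_prices above, to visualise assessment spread around the mid.\n", "ax.fill_between(spark30_174['Release Date'], spark30_174['USDperdayMin'], spark30_174['USDperdayMax'], color='#48C38D', alpha=0.15, label='Spark30S min/max range')\n", 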
"ax.plot(front_df['Release Date'],front_df['FreightBreakeven'], color='#4F41F4', linewidth=2, label='US Arb [M+1] Freight Breakeven Level')\n", "\n", "ax.set_xlim(datetime.datetime.today() - datetime.timedelta(days=380), datetime.datetime.today())\n", "ax.set_ylim(-100000, 120000)\n", "\n", "plt.title('Spark30S (Atlantic) vs. US Arb [M+1] Freight Breakeven Level')\n", "\n", "sns.despine(left=True, bottom=True)\n", "plt.grid(True)\n", "plt.legend()" ] }, { "cell_type": "markdown", "id": "e5002787", "metadata": {}, "source": [ "## Plotting with Conditional Shading" ] }, { "cell_type": "code", "execution_count": null, "id": "76361e74", "metadata": {}, "outputs": [], "source": [ "spark30_174['Release Date'] = pd.to_datetime(spark30_174['Release Date'])\n", "merge_df = pd.merge(spark30_174, front_df, left_on='Release Date', right_on='Release Date', how='inner')" ] }, { "cell_type": "code", "execution_count": null, "id": "328faa9b", "metadata": {}, "outputs": [], "source": [ "fig2, ax2 = plt.subplots(figsize=(15,7))\n", "\n", "ax2.plot(merge_df['Release Date'],merge_df['USDperday'], color = '#48C38D', linewidth=2.5, label='Spark30S (Atlantic)')\n", "ax2.plot(merge_df['Release Date'],merge_df['FreightBreakeven'], color='#4F41F4', linewidth=2, label='US Arb [M+1] Freight Breakeven Level')\n", "\n", "ax2.fill_between(merge_df['Release Date'], merge_df['USDperday'], merge_df['FreightBreakeven'], \\\n", " where=merge_df['USDperday']>merge_df['FreightBreakeven'], facecolor='red', interpolate=True, alpha=0.05)\n", "\n", "ax2.fill_between(merge_df['Release Date'], merge_df['USDperday'], merge_df['FreightBreakeven'], \\\n", " where=merge_df['USDperday']