{ "cells": [ { "cell_type": "markdown", "id": "25c8380d", "metadata": {}, "source": [ "# FFA Seasonality Charts\n", "\n", "This script allows you to plot seasonality charts for a specific contract month from our Spark30FFA and Spark25FFA freight rates.\n", "\n", "This script uses elements from our API code samples. If you'd like a more basic and informative example of how to pull data via the Spark API, please visit our Github or API website:\n", "\n", "- Github: https://github.com/spark-commodities/api-code-samples/blob/master/jupyter_notebooks/\n", "- API Website: https://www.sparkcommodities.com/api/code-examples/jupyter.html\n", "\n", "\n", "### Have any questions?\n", "\n", "If you have any questions regarding our API, or need help accessing specific datasets, please contact us at:\n", "\n", "__data@sparkcommodities.com__\n", "\n", "or refer to our API website for more information about this endpoint:\n", "https://www.sparkcommodities.com/api/request/contracts.html" ] }, { "cell_type": "markdown", "id": "fe1a7ca7", "metadata": {}, "source": [ "### Have any questions?\n", "\n", "If you have any questions regarding our API, or need help accessing specific datasets, please contact us at:\n", "\n", "__data@sparkcommodities.com__" ] }, { "cell_type": "markdown", "id": "c5716130", "metadata": {}, "source": [ "## 1. Importing Data\n", "\n", "Here we define the functions that allow us to retrieve the valid credentials to access the Spark API.\n", "\n", "This section can remain unchanged for most Spark API users." ] }, { "cell_type": "code", "execution_count": 18, "id": "fcb94776", "metadata": {}, "outputs": [], "source": [ "# importing packages for calling the API\n", "import json\n", "import os\n", "import sys\n", "from base64 import b64encode\n", "from pprint import pprint\n", "from urllib.parse import urljoin\n", "from datetime import datetime\n", "import pandas as pd\n", "\n", "try:\n", " from urllib import request, parse\n", " from urllib.error import HTTPError\n", "except ImportError:\n", " raise RuntimeError(\"Python 3 required\")\n", "\n", "import warnings\n", "\n", "warnings.filterwarnings('ignore')" ] }, { "cell_type": "code", "execution_count": 19, "id": "9a0e42dc", "metadata": {}, "outputs": [], "source": [ "# Defining functions for API request\n", "\n", "\n", "API_BASE_URL = \"https://api.sparkcommodities.com\"\n", "\n", "\n", "def retrieve_credentials(file_path=None):\n", " \"\"\"\n", " Find credentials either by reading the client_credentials file or reading\n", " environment variables\n", " \"\"\"\n", " if file_path is None:\n", " client_id = os.getenv(\"SPARK_CLIENT_ID\")\n", " client_secret = os.getenv(\"SPARK_CLIENT_SECRET\")\n", " if not client_id or not client_secret:\n", " raise RuntimeError(\n", " \"SPARK_CLIENT_ID and SPARK_CLIENT_SECRET environment vars required\"\n", " )\n", " else:\n", " # Parse the file\n", " if not os.path.isfile(file_path):\n", " raise RuntimeError(\"The file {} doesn't exist\".format(file_path))\n", "\n", " with open(file_path) as fp:\n", " lines = [l.replace(\"\\n\", \"\") for l in fp.readlines()]\n", "\n", " if lines[0] in (\"clientId,clientSecret\", \"client_id,client_secret\"):\n", " client_id, client_secret = lines[1].split(\",\")\n", " else:\n", " print(\"First line read: '{}'\".format(lines[0]))\n", " raise RuntimeError(\n", " \"The specified file {} doesn't look like to be a Spark API client \"\n", " \"credentials file\".format(file_path)\n", " )\n", "\n", " print(\">>>> Found credentials!\")\n", " print(\n", " \">>>> Client_id={}, 
 " print(\n", " \">>>> Client_id={}, client_secret={}****\".format(client_id, client_secret[:5])\n", " )\n", "\n", " return client_id, client_secret\n", "\n", "\n", "def do_api_post_query(uri, body, headers):\n", " \"\"\"\n", " OAuth2 authentication requires a POST request with client credentials before accessing the API.\n", " This POST request will return an Access Token which will be used for the API GET request.\n", " \"\"\"\n", " url = urljoin(API_BASE_URL, uri)\n", "\n", " data = json.dumps(body).encode(\"utf-8\")\n", "\n", " # HTTP POST request\n", " req = request.Request(url, data=data, headers=headers)\n", " try:\n", " response = request.urlopen(req)\n", " except HTTPError as e:\n", " print(\"HTTP Error: \", e.code)\n", " print(e.read())\n", " sys.exit(1)\n", "\n", " resp_content = response.read()\n", "\n", " # The server must return HTTP 201. Raise an error if this is not the case\n", " assert response.status == 201, resp_content\n", "\n", " # The server returned a JSON response\n", " content = json.loads(resp_content)\n", "\n", " return content\n", "\n", "\n", "def do_api_get_query(uri, access_token):\n", " \"\"\"\n", " After receiving an Access Token, we can request information from the API.\n", " \"\"\"\n", " url = urljoin(API_BASE_URL, uri)\n", "\n", " headers = {\n", " \"Authorization\": \"Bearer {}\".format(access_token),\n", " \"Accept\": \"application/json\",\n", " }\n", "\n", " # HTTP GET request\n", " req = request.Request(url, headers=headers)\n", " try:\n", " response = request.urlopen(req)\n", " except HTTPError as e:\n", " print(\"HTTP Error: \", e.code)\n", " print(e.read())\n", " sys.exit(1)\n", "\n", " resp_content = response.read()\n", "\n", " # The server must return HTTP 200. Raise an error if this is not the case\n", " assert response.status == 200, resp_content\n", "\n", " # The server returned a JSON response\n", " content = json.loads(resp_content)\n", "\n", " return content\n", "\n", "\n", "def get_access_token(client_id, client_secret):\n", " \"\"\"\n", " Get a new access_token. Access tokens are what applications use to make\n", " API requests. Access tokens must be kept confidential in storage.\n", "\n", " # Procedure:\n", "\n", " Do a POST query with `grantType` and `scopes` in the body. A basic authorization\n", " HTTP header is required. The \"Basic\" HTTP authentication scheme is defined in\n", " RFC 7617, which transmits credentials as `clientId:clientSecret` pairs, encoded\n", " using base64.\n", " \"\"\"\n", "\n", " # Note: for the sake of this example, we choose to use the Python urllib from the\n", " # standard lib. One should consider using https://requests.readthedocs.io/\n", "\n", " payload = \"{}:{}\".format(client_id, client_secret).encode()\n", " headers = {\n", " \"Authorization\": b64encode(payload).decode(),\n", " \"Accept\": \"application/json\",\n", " \"Content-Type\": \"application/json\",\n", " }\n", " body = {\n", " \"grantType\": \"clientCredentials\",\n", " \"scopes\": \"read:lng-freight-prices,read:routes\",\n", " }\n", "\n", " content = do_api_post_query(uri=\"/oauth/token/\", body=body, headers=headers)\n", "\n", " print(\n", " \">>>> Successfully fetched an access token {}****, valid {} seconds.\".format(\n", " content[\"accessToken\"][:5], content[\"expiresIn\"]\n", " )\n", " )\n", "\n", " return content[\"accessToken\"]" ] }
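, { "cell_type": "markdown", "id": "requests-alt-md", "metadata": {}, "source": [ "As the comment in `get_access_token` notes, the `requests` library is a friendlier alternative to `urllib`. Purely as an illustrative sketch, the same token request could look like the cell below. It assumes `requests` is installed, and the helper name `get_access_token_requests` is ours, not part of the Spark code samples; the rest of this notebook keeps using the urllib-based helpers above." ] }, { "cell_type": "code", "execution_count": null, "id": "requests-alt-code", "metadata": {}, "outputs": [], "source": [ "# Sketch only: the same token request as get_access_token(), using `requests`\n", "# (pip install requests). It mirrors the headers and body built above.\n", "import requests\n", "\n", "\n", "def get_access_token_requests(client_id, client_secret):\n", "    payload = \"{}:{}\".format(client_id, client_secret).encode()\n", "    response = requests.post(\n", "        urljoin(API_BASE_URL, \"/oauth/token/\"),\n", "        json={\n", "            \"grantType\": \"clientCredentials\",\n", "            \"scopes\": \"read:lng-freight-prices,read:routes\",\n", "        },\n", "        headers={\n", "            # Same Authorization header as get_access_token() builds above\n", "            \"Authorization\": b64encode(payload).decode(),\n", "            \"Accept\": \"application/json\",\n", "        },\n", "    )\n", "    response.raise_for_status()  # equivalent of the status assert above\n", "    return response.json()[\"accessToken\"]" ] }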
, { "cell_type": "markdown", "id": "691c889f", "metadata": {}, "source": [ "## Defining Fetch Request\n", "\n", "Here is where we define what type of data we want to fetch from the API.\n", "\n", "In my fetch request, I use the URL:\n", "\n", "__uri=\"/v1.0/contracts/\"__\n", "\n", "This queries contract price data specifically. Other data products (such as shipping route costs) require different URLs in the fetch request (refer to our other Python API examples)." ] }, { "cell_type": "code", "execution_count": 20, "id": "7d5fad75", "metadata": {}, "outputs": [], "source": [ "# Defining function for collecting the list of contracts\n", "def list_contracts(access_token):\n", " \"\"\"\n", " Fetch available contracts. Return contract ticker symbols.\n", "\n", " # Procedure:\n", "\n", " Do a GET query to /v1.0/contracts/ with a Bearer token authorization HTTP header.\n", " \"\"\"\n", " content = do_api_get_query(uri=\"/v1.0/contracts/\", access_token=access_token)\n", "\n", " print(\">>>> All the contracts you can fetch\")\n", " tickers = []\n", " for contract in content[\"data\"]:\n", " print(contract[\"fullName\"])\n", " tickers.append(contract[\"id\"])\n", "\n", " return tickers" ] }, { "cell_type": "markdown", "id": "fd3171a8", "metadata": {}, "source": [ "## N.B. Credentials\n", "\n", "Here we call the above functions, and input the file path to our credentials.\n", "\n", "N.B. You must have downloaded your client credentials CSV file before proceeding. Please refer to the API documentation if you have not downloaded them already. Instructions for downloading your credentials can be found here:\n", "\n", "https://api.sparkcommodities.com/redoc#section/Authentication/Create-an-Oauth2-Client\n", "\n", "\n", "The code then prints the contracts available to you via the API; their corresponding ticker IDs are returned in the `tickers` list used throughout the rest of this notebook."
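] }, { "cell_type": "markdown", "id": "env-creds-md", "metadata": {}, "source": [ "Note that `retrieve_credentials` also supports environment variables: called without a `file_path`, it falls back to `SPARK_CLIENT_ID` and `SPARK_CLIENT_SECRET`. A minimal sketch of that alternative is below, for reference only; the cell after it uses the CSV file." ] }, { "cell_type": "code", "execution_count": null, "id": "env-creds-code", "metadata": {}, "outputs": [], "source": [ "# Alternative to the CSV file: export SPARK_CLIENT_ID and SPARK_CLIENT_SECRET\n", "# in your shell (or set them via os.environ), then call retrieve_credentials()\n", "# with no arguments.\n", "if os.getenv(\"SPARK_CLIENT_ID\") and os.getenv(\"SPARK_CLIENT_SECRET\"):\n", "    client_id, client_secret = retrieve_credentials()"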
] }, { "cell_type": "code", "execution_count": 21, "id": "602d2492", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ ">>>> Found credentials!\n", ">>>> Client_id=01c23590-ef6c-4a36-8237-c89c3f1a3b2a, client_secret=80763****\n", ">>>> Successfully fetched an access token eyJhb****, valid 604799 seconds.\n", "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ0eXBlIjoiYWNjZXNzVG9rZW4iLCJzdWIiOiIwMWMyMzU5MC1lZjZjLTRhMzYtODIzNy1jODljM2YxYTNiMmEiLCJzdWJUeXBlIjoib2F1dGgtY2xpZW50IiwiZXhwIjoxNzMwNzI2NTYzLCJoYXNoZWRTZWNyZXQiOiJwYmtkZjJfc2hhMjU2JDYwMDAwMCRoTXRMNDlrMUZUaVVzTE42Njlqc2pPJHVCSXNxcml5b1NHVzJTS1AvaHVLNHh3eTZ4d3VDN001aUdGRm43N2l4S1U9Iiwib3JnVXVpZCI6IjQ5MzhiMGJiLTVmMjctNDE2NC04OTM4LTUyNTdmYmQzNTNmZiIsInNjb3BlcyI6WyJyZWFkOnJvdXRlcyIsInJlYWQ6cHJpY2VzIl0sImNsaWVudFR5cGUiOiJvYXV0aC1jbGllbnQifQ.itQucx6vvWjS8YAo3pQTPjO3FMtaT819kGkMW6OLWxw\n", ">>>> All the contracts you can fetch\n", "Spark25F Pacific 160 TFDE\n", "Spark30F Atlantic 160 TFDE\n", "Spark25S Pacific\n", "Spark25Fo Pacific\n", "Spark25FFA Pacific\n", "Spark25FFAYearly Pacific\n", "Spark30S Atlantic\n", "Spark30Fo Atlantic\n", "Spark30FFA Atlantic\n", "Spark30FFAYearly Atlantic\n", "SparkNWE DES 1H\n", "SparkNWE-B 1H\n", "SparkNWE DES 2H\n", "SparkNWE-B 2H\n", "SparkNWE-B F\n", "SparkNWE DES F\n", "SparkNWE-B Fo\n", "SparkNWE DES Fo\n", "SparkNWE-DES-Fin Monthly\n", "SparkNWE-Fin Monthly\n", "SparkSWE-B F\n", "SparkSWE DES F\n", "SparkSWE-B Fo\n", "SparkSWE DES Fo\n", "SparkSWE-DES-Fin Monthly\n", "SparkSWE-Fin Monthly\n" ] } ], "source": [ "# Insert file path to your client credentials here\n", "client_id, client_secret = retrieve_credentials(file_path=\"/tmp/client_credentials.csv\")\n", "\n", "# Authenticate:\n", "access_token = get_access_token(client_id, client_secret)\n", "print(access_token)\n", "\n", "# Fetch all contracts:\n", "tickers = list_contracts(access_token)" ] }, { "cell_type": "markdown", "id": "a0e0e030", "metadata": {}, "source": [ "# Data Import" ] }, { "cell_type": "code", "execution_count": 22, "id": "3302a85f", "metadata": {}, "outputs": [], "source": [ "def fetch_historical_price_releases(access_token, ticker, limit, offset=None):\n", " \"\"\"\n", " For a selected contract, this endpoint returns all the Price Releases you can\n", " access according to your current subscription, ordered by release date descending.\n", "\n", " **Note**: Unlimited access to historical data and full forward curves is only\n", " available to those with Premium access. 
Get in touch to find out more.\n", "\n", " **Params**\n", "\n", " limit: optional integer value to set an upper limit on the number of price\n", " releases returned by the endpoint.\n", "\n", " offset: optional integer value to set from where to start returning data.\n", " Default is 0.\n", "\n", " # Procedure:\n", "\n", " Do GET queries to /v1.0/contracts/{contract_ticker_symbol}/price-releases/\n", " with a Bearer token authorization HTTP header.\n", " \"\"\"\n", " print(\">>>> Get price releases for {}\".format(ticker))\n", "\n", " query_params = \"?limit={}\".format(limit)\n", " if offset is not None:\n", " query_params += \"&offset={}\".format(offset)\n", "\n", " content = do_api_get_query(\n", " uri=\"/v1.0/contracts/{}/price-releases/{}\".format(ticker, query_params),\n", " access_token=access_token,\n", " )\n", "\n", " my_dict = content[\"data\"]\n", "\n", " return my_dict\n" ] }, { "cell_type": "markdown", "id": "99be9416", "metadata": {}, "source": [ "### Function to call data and store as a DataFrame" ] }, { "cell_type": "code", "execution_count": 23, "id": "32d6eb83", "metadata": {}, "outputs": [], "source": [ "# Defining the function\n", "def fetch_ffa_prices(my_tick, my_lim):\n", " print(my_tick)\n", "\n", " my_dict_hist = fetch_historical_price_releases(access_token, my_tick, limit=my_lim)\n", "\n", " release_dates = []\n", " period_start = []\n", " period_end = []\n", " period_name = []\n", " cal_month = []\n", " ticker = []\n", " usd_day = []\n", " day_min = []\n", " day_max = []\n", "\n", " for release in my_dict_hist:\n", " release_date = release[\"releaseDate\"]\n", " data = release[\"data\"]\n", "\n", " for d in data:\n", " data_points = d[\"dataPoints\"]\n", " for data_point in data_points:\n", " period_start_at = data_point[\"deliveryPeriod\"][\"startAt\"]\n", " period_start.append(period_start_at)\n", " period_end_at = data_point[\"deliveryPeriod\"][\"endAt\"]\n", " period_end.append(period_end_at)\n", " period_name.append(data_point[\"deliveryPeriod\"][\"name\"])\n", "\n", " release_dates.append(release_date)\n", " ticker.append(release[\"contractId\"])\n", " cal_month.append(\n", " datetime.strptime(period_start_at, \"%Y-%m-%d\").strftime(\"%b-%Y\")\n", " )\n", "\n", " usd_day.append(int(data_point[\"derivedPrices\"][\"usdPerDay\"][\"spark\"]))\n", " day_min.append(\n", " int(data_point[\"derivedPrices\"][\"usdPerDay\"][\"sparkMin\"])\n", " )\n", " day_max.append(\n", " int(data_point[\"derivedPrices\"][\"usdPerDay\"][\"sparkMax\"])\n", " )\n", "\n", " historical_df = pd.DataFrame(\n", " {\n", " \"Release Date\": release_dates,\n", " \"ticker\": ticker,\n", " \"Period Name\": period_name,\n", " \"Period Start\": period_start,\n", " \"Period End\": period_end,\n", " \"Calendar Month\": cal_month,\n", " \"Spark\": usd_day,\n", " \"SparkMin\": day_min,\n", " \"SparkMax\": day_max,\n", " }\n", " )\n", "\n", " historical_df['Release Date'] = pd.to_datetime(historical_df['Release Date'], format='%Y-%m-%d')\n", " historical_df['Period Start'] = pd.to_datetime(historical_df['Period Start'])\n", " historical_df['Period End'] = pd.to_datetime(historical_df['Period End'])\n", "\n", " # Days between release date and the start of the delivery period\n", " # (negative while the contract is still forward-dated)\n", " historical_df['Time Diff'] = (historical_df['Release Date'] - historical_df['Period Start']).dt.days\n", "\n", " return historical_df" ] }, { "cell_type": "markdown", "id": "7135318a", "metadata": {}, "source": [ "### Call those functions for Spark30FFA and Spark25FFA\n", "\n", "We call the function defined above and create two dataframes:\n", "\n", "- spark25ffa - storing all historical Spark25FFA data\n", "- spark30ffa - storing all historical Spark30FFA data" ] }
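, { "cell_type": "markdown", "id": "ticker-select-md", "metadata": {}, "source": [ "The cell below picks the two contracts by their position in `tickers` (`tickers[4]` and `tickers[8]`), matching the contract list printed earlier. That position depends on the order the API returns contracts in, so a more defensive sketch is to filter on the ticker ids themselves (the ids `spark25ffa-monthly` and `spark30ffa-monthly` are visible in the output below):" ] }, { "cell_type": "code", "execution_count": null, "id": "ticker-select-code", "metadata": {}, "outputs": [], "source": [ "# Positional indexing (tickers[4], tickers[8]) breaks if the contract list is\n", "# ever reordered; filtering by id is more robust.\n", "ffa_ticks = [t for t in tickers if t in (\"spark25ffa-monthly\", \"spark30ffa-monthly\")]\n", "print(ffa_ticks)" ] }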
"- spark25ffa - storing all historical Spark25FFA data\n", "- spark30ffa - storing all historical Spark30FFA data" ] }, { "cell_type": "code", "execution_count": 24, "id": "b367acfe", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "spark25ffa-monthly\n", ">>>> Get price releases for spark25ffa-monthly\n", "spark30ffa-monthly\n", ">>>> Get price releases for spark30ffa-monthly\n" ] } ], "source": [ "spark25ffa = fetch_ffa_prices(tickers[4], 900)\n", "\n", "spark30ffa = fetch_ffa_prices(tickers[8], 900)" ] }, { "cell_type": "markdown", "id": "9126aa2b", "metadata": {}, "source": [ "# FFA Contract Evolution + Seasonality\n", "\n", "Compare Spark30FFA December contracts from the past 3 years, and track how these contracts have priced in the year leading up to contract settlement" ] }, { "cell_type": "markdown", "id": "3676a9e1", "metadata": {}, "source": [ "## Sorting Data\n", "\n", "We then create new columns to categorise the data we have. \n", "- By creating the 'Years', 'Quarters' and 'Seasons' columns, we can filter the data accordingly.\n", "- By creating the 'Day of Year' column, we can plot the yearly data on the same x-axis range." ] }, { "cell_type": "code", "execution_count": 25, "id": "e698cbfd", "metadata": {}, "outputs": [], "source": [ "def sort_years(df):\n", " \n", " reldates = df['Release Date'].to_list()\n", " startdates = df['Period Start'].to_list()\n", "\n", " dayofyear = []\n", " \n", " for r in reldates:\n", " ir = reldates.index(r)\n", " if r.year - startdates[ir].year == -1:\n", " dayofyear.append(r.timetuple().tm_yday - 365)\n", " elif r.year - startdates[ir].year == -2:\n", " dayofyear.append(r.timetuple().tm_yday - 730)\n", " else:\n", " dayofyear.append(r.timetuple().tm_yday)\n", " \n", " df['Day of Year'] = dayofyear\n", "\n", " return df" ] }, { "cell_type": "code", "execution_count": 26, "id": "8ba60d6f", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "10" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "spark30ffa['Release Date'].iloc[0].month" ] }, { "cell_type": "markdown", "id": "bbd5a39b", "metadata": {}, "source": [ "## Group Dataframe by CalMonth, Quarter or Season and choose contracts to compare" ] }, { "cell_type": "code", "execution_count": 27, "id": "73853999", "metadata": {}, "outputs": [], "source": [ "groups = spark30ffa.groupby('Calendar Month')" ] }, { "cell_type": "code", "execution_count": 28, "id": "27f1f2b0", "metadata": {}, "outputs": [], "source": [ "dec24 = groups.get_group('Dec-2024')\n", "dec23 = groups.get_group('Dec-2023')\n", "dec22 = groups.get_group('Dec-2022')\n", "\n", "dec24 = sort_years(dec24)\n", "dec23 = sort_years(dec23)\n", "dec22 = sort_years(dec22)" ] }, { "cell_type": "code", "execution_count": 29, "id": "eff578db", "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", " | Release Date | \n", "ticker | \n", "Period Name | \n", "Period Start | \n", "Period End | \n", "Calendar Month | \n", "Spark | \n", "SparkMin | \n", "SparkMax | \n", "Time Diff | \n", "Day of Year | \n", "
---|---|---|---|---|---|---|---|---|---|---|---|
4099 | \n", "2023-12-28 | \n", "spark30ffa-monthly | \n", "M+0 | \n", "2023-12-01 | \n", "2023-12-31 | \n", "Dec-2023 | \n", "140500 | \n", "127250 | \n", "150000 | \n", "27 | \n", "362 | \n", "
4124 | \n", "2023-12-27 | \n", "spark30ffa-monthly | \n", "M+0 | \n", "2023-12-01 | \n", "2023-12-31 | \n", "Dec-2023 | \n", "141000 | \n", "127500 | \n", "150000 | \n", "26 | \n", "361 | \n", "
4149 | \n", "2023-12-22 | \n", "spark30ffa-monthly | \n", "M+0 | \n", "2023-12-01 | \n", "2023-12-31 | \n", "Dec-2023 | \n", "142000 | \n", "128000 | \n", "150000 | \n", "21 | \n", "356 | \n", "
4174 | \n", "2023-12-21 | \n", "spark30ffa-monthly | \n", "M+0 | \n", "2023-12-01 | \n", "2023-12-31 | \n", "Dec-2023 | \n", "142500 | \n", "130000 | \n", "150000 | \n", "20 | \n", "355 | \n", "
4199 | \n", "2023-12-20 | \n", "spark30ffa-monthly | \n", "M+0 | \n", "2023-12-01 | \n", "2023-12-31 | \n", "Dec-2023 | \n", "143500 | \n", "133000 | \n", "150000 | \n", "19 | \n", "354 | \n", "
... | \n", "... | \n", "... | \n", "... | \n", "... | \n", "... | \n", "... | \n", "... | \n", "... | \n", "... | \n", "... | \n", "... | \n", "
15017 | \n", "2021-10-08 | \n", "spark30ffa-monthly | \n", "M+26 | \n", "2023-12-01 | \n", "2023-12-31 | \n", "Dec-2023 | \n", "112000 | \n", "90000 | \n", "140000 | \n", "-784 | \n", "-449 | \n", "
15044 | \n", "2021-10-07 | \n", "spark30ffa-monthly | \n", "M+26 | \n", "2023-12-01 | \n", "2023-12-31 | \n", "Dec-2023 | \n", "110500 | \n", "90000 | \n", "140000 | \n", "-785 | \n", "-450 | \n", "
15071 | \n", "2021-10-06 | \n", "spark30ffa-monthly | \n", "M+26 | \n", "2023-12-01 | \n", "2023-12-31 | \n", "Dec-2023 | \n", "110500 | \n", "90000 | \n", "140000 | \n", "-786 | \n", "-451 | \n", "
15098 | \n", "2021-10-05 | \n", "spark30ffa-monthly | \n", "M+26 | \n", "2023-12-01 | \n", "2023-12-31 | \n", "Dec-2023 | \n", "110000 | \n", "90000 | \n", "140000 | \n", "-787 | \n", "-452 | \n", "
15125 | \n", "2021-10-04 | \n", "spark30ffa-monthly | \n", "M+26 | \n", "2023-12-01 | \n", "2023-12-31 | \n", "Dec-2023 | \n", "110000 | \n", "90000 | \n", "140000 | \n", "-788 | \n", "-453 | \n", "
561 rows × 11 columns
\n", "