{ "cells": [ { "cell_type": "markdown", "id": "25c8380d", "metadata": {}, "source": [ "# Python API Example - Price Release Data Import and Storage\n", "## Importing Price Data into a Pandas DataFrame and Plotting\n", "\n", "Here we import price release data from the Spark Python API. \n", "\n", "We then store them as local variables so that they can be used for analysis.\n", "\n", "This guide is designed to provide an example of how to access the Spark API:\n", "- The path to your client credentials is the only input needed to run this script (just before Section 2)\n", "- This script has been designed to display the raw outputs of requests from the API, and then shows you how to format those outputs to enable easy reading and analysis\n", "- This script can be copied and pasted by customers for quick use of the API\n", "- Once comfortable with the process, you can change the variables that are called to produce your own custom analysis products. (Section 2 onwards in this guide).\n", " \n", "__N.B. This guide is just for Price release data. If you're looking for other API data products (such as Freight routes or Netbacks), please refer to their according code example files.__ " ] }, { "cell_type": "markdown", "id": "c6941c5b", "metadata": {}, "source": [ "### Have any questions?\n", "\n", "If you have any questions regarding our API, or need help accessing specific datasets, please contact us at:\n", "\n", "__data@sparkcommodities.com__\n", "\n", "or refer to our API website for more information about this endpoint: https://www.sparkcommodities.com/api/request/contracts.html" ] }, { "cell_type": "markdown", "id": "11fecfbe", "metadata": {}, "source": [ "## 1. Importing Data\n", "\n", "Here we define the functions that allow us to retrieve the valid credentials to access the Spark API.\n", "\n", "This section can remain unchanged for most Spark API users." 
] }, { "cell_type": "code", "execution_count": 1, "id": "cc046bc4", "metadata": {}, "outputs": [], "source": [ "# Importing libraries for calling the API\n", "import json\n", "import os\n", "import sys\n", "from base64 import b64encode\n", "from urllib.parse import urljoin\n", "import pandas as pd\n", "\n", "\n", "try:\n", " from urllib import request, parse\n", " from urllib.error import HTTPError\n", "except ImportError:\n", " raise RuntimeError(\"Python 3 required\")" ] }, { "cell_type": "code", "execution_count": 2, "id": "6fe087d8", "metadata": {}, "outputs": [], "source": [ "# Defining functions for API request\n", "\n", "API_BASE_URL = \"https://api.sparkcommodities.com\"\n", "\n", "\n", "def retrieve_credentials(file_path=None):\n", " \"\"\"\n", " Find credentials either by reading the client_credentials file or reading\n", " environment variables\n", " \"\"\"\n", " if file_path is None:\n", " client_id = os.getenv(\"SPARK_CLIENT_ID\")\n", " client_secret = os.getenv(\"SPARK_CLIENT_SECRET\")\n", " if not client_id or not client_secret:\n", " raise RuntimeError(\n", " \"SPARK_CLIENT_ID and SPARK_CLIENT_SECRET environment vars required\"\n", " )\n", " else:\n", " # Parse the file\n", " if not os.path.isfile(file_path):\n", " raise RuntimeError(\"The file {} doesn't exist\".format(file_path))\n", "\n", " with open(file_path) as fp:\n", " lines = [l.replace(\"\\n\", \"\") for l in fp.readlines()]\n", "\n", " if lines[0] in (\"clientId,clientSecret\", \"client_id,client_secret\"):\n", " client_id, client_secret = lines[1].split(\",\")\n", " else:\n", " print(\"First line read: '{}'\".format(lines[0]))\n", " raise RuntimeError(\n", " \"The specified file {} doesn't look like to be a Spark API client \"\n", " \"credentials file\".format(file_path)\n", " )\n", "\n", " print(\">>>> Found credentials!\")\n", " print(\n", " \">>>> Client_id={}, client_secret={}****\".format(client_id, client_secret[:5])\n", " )\n", "\n", " return client_id, client_secret\n", "\n", "\n", "def do_api_post_query(uri, body, headers):\n", " \"\"\"\n", " OAuth2 authentication requires a POST request with client credentials before accessing the API. \n", " This POST request will return an Access Token which will be used for the API GET request.\n", " \"\"\"\n", " url = urljoin(API_BASE_URL, uri)\n", "\n", " data = json.dumps(body).encode(\"utf-8\")\n", "\n", " # HTTP POST request\n", " req = request.Request(url, data=data, headers=headers)\n", " try:\n", " response = request.urlopen(req)\n", " except HTTPError as e:\n", " print(\"HTTP Error: \", e.code)\n", " print(e.read())\n", " sys.exit(1)\n", "\n", " resp_content = response.read()\n", "\n", " # The server must return HTTP 201. 
Raise an error if this is not the case\n",
 "    assert response.status == 201, resp_content\n",
 "\n",
 "    # The server returned a JSON response\n",
 "    content = json.loads(resp_content)\n",
 "\n",
 "    return content\n",
 "\n",
 "\n",
 "def do_api_get_query(uri, access_token):\n",
 "    \"\"\"\n",
 "    After receiving an Access Token, we can request information from the API.\n",
 "    \"\"\"\n",
 "    url = urljoin(API_BASE_URL, uri)\n",
 "\n",
 "    headers = {\n",
 "        \"Authorization\": \"Bearer {}\".format(access_token),\n",
 "        \"Accept\": \"application/json\",\n",
 "    }\n",
 "\n",
 "    # HTTP GET request\n",
 "    req = request.Request(url, headers=headers)\n",
 "    try:\n",
 "        response = request.urlopen(req)\n",
 "    except HTTPError as e:\n",
 "        print(\"HTTP Error: \", e.code)\n",
 "        print(e.read())\n",
 "        sys.exit(1)\n",
 "\n",
 "    resp_content = response.read()\n",
 "\n",
 "    # The server must return HTTP 200. Raise an error if this is not the case\n",
 "    assert response.status == 200, resp_content\n",
 "\n",
 "    # The server returned a JSON response\n",
 "    content = json.loads(resp_content)\n",
 "\n",
 "    return content\n",
 "\n",
 "\n",
 "def get_access_token(client_id, client_secret):\n",
 "    \"\"\"\n",
 "    Get a new access_token. Access tokens are what applications use to make\n",
 "    API requests. Access tokens must be kept confidential in storage.\n",
 "\n",
 "    # Procedure:\n",
 "\n",
 "    Do a POST query with `grantType` and `scopes` in the body. A basic authorization\n",
 "    HTTP header is required. The \"Basic\" HTTP authentication scheme is defined in\n",
 "    RFC 7617, which transmits credentials as `clientId:clientSecret` pairs, encoded\n",
 "    using base64.\n",
 "    \"\"\"\n",
 "\n",
 "    # Note: for the sake of this example, we choose to use the Python urllib from the\n",
 "    # standard lib. One should consider using https://requests.readthedocs.io/\n",
 "\n",
 "    payload = \"{}:{}\".format(client_id, client_secret).encode()\n",
 "    headers = {\n",
 "        \"Authorization\": b64encode(payload).decode(),\n",
 "        \"Accept\": \"application/json\",\n",
 "        \"Content-Type\": \"application/json\",\n",
 "    }\n",
 "    body = {\n",
 "        \"grantType\": \"clientCredentials\",\n",
 "        \"scopes\": \"read:prices,read:routes\",\n",
 "    }\n",
 "\n",
 "    content = do_api_post_query(uri=\"/oauth/token/\", body=body, headers=headers)\n",
 "\n",
 "    print(\n",
 "        \">>>> Successfully fetched an access token {}****, valid {} seconds.\".format(\n",
 "            content[\"accessToken\"][:5], content[\"expiresIn\"]\n",
 "        )\n",
 "    )\n",
 "\n",
 "    return content[\"accessToken\"]" ] },
 { "cell_type": "markdown", "id": "691c889f", "metadata": {}, "source": [
 "## Defining Fetch Request\n",
 "\n",
 "Here is where we define what type of data we want to fetch from the API.\n",
 "\n",
 "In my fetch request, I use the URL:\n",
 "\n",
 "__uri=\"/v1.0/contracts/\"__\n",
 "\n",
 "This is to query contract price data specifically. Other data products (such as shipping route costs) require different URLs in the fetch request (refer to other Python API examples)."
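,
 "\n",
 "\n",
 "Each element of the response's 'data' list includes (at least) the two fields used below: an 'id' (the ticker name used throughout this script) and a 'fullName'. As an illustrative sketch of the shape:\n",
 "\n",
 "```python\n",
 "# illustrative contract entry (only the fields used in this guide are shown)\n",
 "{\"id\": \"spark25s\", \"fullName\": \"Spark25S Pacific\"}\n",
 "```" ] },
 { "cell_type": "code", "execution_count": 3, "id": "5e341bdf", "metadata": {}, "outputs": [], "source": [
 "# Define function for listing contracts from API\n",
 "def list_contracts(access_token):\n",
 "    \"\"\"\n",
 "    Fetch available contracts. 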
Return contract ticker symbols\n",
 "\n",
 "    # Procedure:\n",
 "\n",
 "    Do a GET query to /v1.0/contracts/ with a Bearer token authorization HTTP header.\n",
 "    \"\"\"\n",
 "    content = do_api_get_query(uri=\"/v1.0/contracts/\", access_token=access_token)\n",
 "\n",
 "    print(\">>>> All the contracts you can fetch\")\n",
 "    tickers = []\n",
 "    for contract in content[\"data\"]:\n",
 "        print(contract[\"fullName\"])\n",
 "        tickers.append(contract[\"id\"])\n",
 "\n",
 "    return tickers" ] },
 { "cell_type": "markdown", "id": "fd3171a8", "metadata": {}, "source": [
 "## N.B. Credentials\n",
 "\n",
 "Here we call the above functions, and input the file path to our credentials.\n",
 "\n",
 "N.B. You must have downloaded your client credentials CSV file before proceeding. Please refer to the API documentation if you have not downloaded them already. Instructions for downloading your credentials can be found here:\n",
 "\n",
 "https://api.sparkcommodities.com/redoc#section/Authentication/Create-an-Oauth2-Client\n",
 "\n",
 "\n",
 "The code then prints the available prices that are callable from the API, and their corresponding Python ticker names are displayed as a list at the bottom of the output." ] },
 { "cell_type": "code", "execution_count": 4, "id": "fd7e89bf", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
 ">>>> Found credentials!\n",
 ">>>> Client_id=875f483b-19de-421a-8e9b-dceff6703e83, client_secret=6cdf8****\n",
 ">>>> Successfully fetched an access token eyJhb****, valid 604799 seconds.\n",
 ">>>> All the contracts you can fetch\n",
 "Spark25F Pacific 160 TFDE\n",
 "Spark30F Atlantic 160 TFDE\n",
 "Spark25S Pacific\n",
 "Spark25Fo Pacific\n",
 "Spark25FFA Pacific\n",
 "Spark25FFAYearly Pacific\n",
 "Spark30S Atlantic\n",
 "Spark30Fo Atlantic\n",
 "Spark30FFA Atlantic\n",
 "Spark30FFAYearly Atlantic\n",
 "SparkNWE DES 1H\n",
 "SparkNWE-B 1H\n",
 "SparkNWE DES 2H\n",
 "SparkNWE-B 2H\n",
 "SparkNWE-B F\n",
 "SparkNWE DES F\n",
 "SparkNWE-B Fo\n",
 "SparkNWE DES Fo\n",
 "SparkNWE-DES-Fin Monthly\n",
 "SparkNWE-Fin Monthly\n",
 "SparkSWE-B F\n",
 "SparkSWE DES F\n",
 "SparkSWE-B Fo\n",
 "SparkSWE DES Fo\n",
 "SparkSWE-DES-Fin Monthly\n",
 "SparkSWE-Fin Monthly\n",
 "['spark25f', 'spark30f', 'spark25s', 'spark25fo', 'spark25ffa-monthly', 'spark25ffa-yearly', 'spark30s', 'spark30fo', 'spark30ffa-monthly', 'spark30ffa-yearly', 'sparknwe-1h', 'sparknwe-b-1h', 'sparknwe-2h', 'sparknwe-b-2h', 'sparknwe-b-f', 'sparknwe-f', 'sparknwe-b-fo', 'sparknwe-fo', 'sparknwe-des-fin-monthly', 'sparknwe-fin-monthly', 'sparkswe-b-f', 'sparkswe-f', 'sparkswe-b-fo', 'sparkswe-fo', 'sparkswe-des-fin-monthly', 'sparkswe-fin-monthly']\n"
 ] } ], "source": [
 "# Insert file path to your client credentials here\n",
 "client_id, client_secret = retrieve_credentials(file_path=\"/tmp/client_credentials.csv\")\n",
 "\n",
 "# Authenticate:\n",
 "access_token = get_access_token(client_id, client_secret)\n",
 "\n",
 "# Fetch all contracts:\n",
 "tickers = list_contracts(access_token)\n",
 "\n",
 "\n",
 "print(tickers)" ] }
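, { "cell_type": "markdown", "id": "7f3a9c1e", "metadata": {}, "source": [
 "If you would rather look a contract up by its full name than by its position in the 'tickers' list, you can build a simple name-to-ticker mapping. This is a minimal sketch that reuses 'do_api_get_query' and the 'fullName' and 'id' fields shown above:" ] },
 { "cell_type": "code", "execution_count": null, "id": "8b2d4e6a", "metadata": {}, "outputs": [], "source": [
 "# Optional: map full contract names to their ticker ids for easier lookup.\n",
 "# A sketch -- it assumes the same /v1.0/contracts/ response structure used above.\n",
 "contracts = do_api_get_query(uri=\"/v1.0/contracts/\", access_token=access_token)\n",
 "\n",
 "name_to_ticker = {c[\"fullName\"]: c[\"id\"] for c in contracts[\"data\"]}\n",
 "\n",
 "# e.g. name_to_ticker[\"Spark25S Pacific\"] returns \"spark25s\" (the same as tickers[2])" ] },
 { "cell_type": "markdown", "id": "fc9cf152", "metadata": {}, "source": [
 "## 2. Latest Price Release\n",
 "\n",
 "Here we call the latest price release and print it in a readable format. This is done using the URL:\n",
 "\n",
 "__/v1.0/contracts/{contract_ticker_symbol}/price-releases/latest/__\n",
 "\n",
 "'tickers[2]' is the Python ticker called here. 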
'tickers' refers to the printed list above, so we can see that 'tickers[2]' refers to 'spark25s'.\n", "\n", "We then save the entire dataset as a local variable called 'my_dict'.\n", "\n", "__N.B. The first two tickers, 'spark25f' and 'spark30f', are deprecated. Historical data for these tickers are available up until 2022-04-01 (yyyy-mm-dd)__\n", "\n", "For more information on API updates, please refer to the API documentation:\n", "\n", "https://api.sparkcommodities.com/redoc#section/API-Changelog" ] }, { "cell_type": "code", "execution_count": 5, "id": "d026eb33", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ ">>>> Get latest price release for spark25s\n", "release date = 2024-10-03\n", "Spark Price={'usdPerDay': '61000', 'usdPerMMBtu': '0.77'} for period starting on 2024-10-18\n", "spark25s\n" ] } ], "source": [ "## Defining the function\n", "\n", "\n", "def fetch_latest_price_releases(access_token, ticker):\n", " \"\"\"\n", " For a contract, fetch then display the latest price release\n", "\n", " # Procedure:\n", "\n", " Do GET queries to /v1.0/contracts/{contract_ticker_symbol}/price-releases/latest/\n", " with a Bearer token authorization HTTP header.\n", " \"\"\"\n", " content = do_api_get_query(\n", " uri=\"/v1.0/contracts/{}/price-releases/latest/\".format(ticker),\n", " access_token=access_token,\n", " )\n", "\n", " release_date = content[\"data\"][\"releaseDate\"]\n", "\n", " print(\">>>> Get latest price release for {}\".format(ticker))\n", " print(\"release date =\", release_date)\n", "\n", " data_points = content[\"data\"][\"data\"][0][\"dataPoints\"]\n", "\n", " for data_point in data_points:\n", " period_start_at = data_point[\"deliveryPeriod\"][\"startAt\"]\n", "\n", " spark_prices = dict()\n", " for unit, prices in data_point[\"derivedPrices\"].items():\n", " spark_prices[unit] = prices[\"spark\"]\n", "\n", " print(f\"Spark Price={spark_prices} for period starting on {period_start_at}\")\n", " print(ticker)\n", "\n", " return content[\"data\"]\n", "\n", "\n", "## Calling that function and storing the output\n", "\n", "# Here we store the entire dataset called from the API\n", "\n", "my_dict = fetch_latest_price_releases(access_token, tickers[2])" ] }, { "cell_type": "code", "execution_count": 6, "id": "04e61ee1", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'id': 20241003, 'contractId': 'spark25s', 'releaseDate': '2024-10-03', 'previousPriceRelease': {'id': 20241002, 'releaseDate': '2024-10-02'}, 'nextPriceRelease': {'id': 20241004, 'releaseDate': '2024-10-04'}, 'assessmentWindowClosedAt': '2024-10-03T10:00:00Z', 'assessmentWindowOpenedAt': '2024-10-03T08:00:00Z', 'data': [{'revisionNumber': 0, 'revisionPublishedAt': '2024-10-03T10:06:15.614660Z', 'numberOfAssessors': None, 'dataPoints': [{'index': 0, 'deliveryPeriod': {'type': 'days', 'startAt': '2024-10-18', 'endAt': '2024-11-17', 'name': 'SparkS', 'lastAssessmentDate': None}, 'yourAssessedPrice': None, 'derivedPrices': {'usdPerDay': {'spark': '61000', 'sparkMin': '50000', 'sparkMax': '65000', 'portfolioPlayer': None, 'portfolioPlayerMin': None, 'portfolioPlayerMax': None, 'shipOwner': None, 'shipOwnerMin': None, 'shipOwnerMax': None}, 'usdPerMMBtu': {'spark': '0.77', 'sparkMin': '0.70', 'sparkMax': '0.79', 'portfolioPlayer': None, 'portfolioPlayerMin': None, 'portfolioPlayerMax': None, 'shipOwner': None, 'shipOwnerMin': None, 'shipOwnerMax': None}}, 'meta': [{'type': 'freight-vessel-type', 'value': '174-2stroke'}]}], 'aggregatedData': 
None}], 'publishedAt': '2024-10-03T10:06:15.614660Z', 'meta': []}\n"
 ] } ], "source": [
 "# Shows how the raw output is formatted\n",
 "print(my_dict)" ] },
 { "cell_type": "markdown", "id": "1f815b30", "metadata": {}, "source": [
 "### N.B.\n",
 "\n",
 "Here we extract only the prices rather than the entire dataset, and save them in a dictionary called 'spark_prices'." ] },
 { "cell_type": "code", "execution_count": 7, "id": "bbf433fa", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
 "{'usdPerDay': '61000', 'usdPerMMBtu': '0.77'}\n"
 ] } ], "source": [
 "# extract the prices\n",
 "data_points = my_dict[\"data\"][0][\"dataPoints\"]\n",
 "\n",
 "for data_point in data_points:\n",
 "    period_start_at = data_point[\"deliveryPeriod\"][\"startAt\"]\n",
 "\n",
 "    spark_prices = dict()\n",
 "    for unit, prices in data_point[\"derivedPrices\"].items():\n",
 "        spark_prices[unit] = prices[\"spark\"]\n",
 "\n",
 "    print(spark_prices)" ] },
 { "cell_type": "markdown", "id": "a0e0e030", "metadata": {}, "source": [
 "## 3. Historical Prices\n",
 "\n",
 "Here we perform a similar task, but with historical prices instead. This is done using the URL:\n",
 "\n",
 "__/v1.0/contracts/{contract_ticker_symbol}/price-releases/?limit={limit}&offset={offset}__\n",
 "\n",
 "First we define the function that imports the data from the Spark API.\n",
 "\n",
 "We then call that function, and define 2 parameters:\n",
 "- 'ticker': which ticker you want to call.\n",
 "    - We define the variable 'my_ticker' after the function definition, and set this to 'tickers[2]', which corresponds to Spark25s.\n",
 "    - Alter this variable to whatever price product you need.\n",
 "- 'limit': this allows you to control how many datapoints you want to call. Here we use 'limit=10', which means we have called the last 10 datapoints (the Spark25 spot price for the last 10 business days).\n",
 "    - Alter this limit to however many datapoints you need.\n",
 "\n",
 "\n",
 "We save the output as a local variable called 'my_dict_hist'."
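,
 "\n",
 "\n",
 "For example, after running the next cell you could page further back through the history by also passing an offset (a usage sketch):\n",
 "\n",
 "```python\n",
 "# the 10 releases immediately before the 10 fetched below (illustrative)\n",
 "older_dict_hist = fetch_historical_price_releases(access_token, my_ticker, limit=10, offset=10)\n",
 "```" ] },
 { "cell_type": "code", "execution_count": 8, "id": "ff4d0dcc", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
 ">>>> Get price releases for spark25s\n",
 "- release date = 2024-10-03\n",
 "Spark Price={'usdPerDay': '61000', 'usdPerMMBtu': '0.77'} for period starting on 2024-10-18\n",
 "- release date = 2024-10-02\n",
 "Spark Price={'usdPerDay': '62000', 'usdPerMMBtu': '0.78'} for period starting on 2024-10-17\n",
 "- release date = 2024-10-01\n",
 "Spark Price={'usdPerDay': '63000', 'usdPerMMBtu': '0.79'} for period starting on 2024-10-16\n",
 "- release date = 2024-09-30\n",
 "Spark Price={'usdPerDay': '63750', 'usdPerMMBtu': '0.79'} for period starting on 2024-10-15\n",
 "- release date = 2024-09-27\n",
 "Spark Price={'usdPerDay': '64000', 'usdPerMMBtu': '0.80'} for period starting on 2024-10-12\n",
 "- release date = 2024-09-26\n",
 "Spark Price={'usdPerDay': '66250', 'usdPerMMBtu': '0.81'} for period starting on 2024-10-11\n",
 "- release date = 2024-09-25\n",
 "Spark Price={'usdPerDay': '69000', 'usdPerMMBtu': '0.82'} for period starting on 2024-10-10\n",
 "- release date = 2024-09-24\n",
 "Spark Price={'usdPerDay': '70250', 'usdPerMMBtu': '0.83'} for period starting on 2024-10-09\n",
 "- release date = 2024-09-23\n",
 "Spark Price={'usdPerDay': '72750', 'usdPerMMBtu': '0.84'} for period starting on 2024-10-08\n",
 "- release date = 2024-09-20\n",
 "Spark Price={'usdPerDay': '73000', 'usdPerMMBtu': '0.84'} for period starting on 2024-10-05\n"
 ] } ], "source": [
 "def fetch_historical_price_releases(access_token, 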
ticker, limit=4, offset=None):\n",
 "    \"\"\"\n",
 "    For a selected contract, this endpoint returns all the Price Releases you can\n",
 "    access according to your current subscription, ordered by release date descending.\n",
 "\n",
 "    **Note**: Unlimited access to historical data and full forward curves is only\n",
 "    available to those with Premium access. Get in touch to find out more.\n",
 "\n",
 "    **Params**\n",
 "\n",
 "    limit: optional integer value to set an upper limit on the number of price\n",
 "    releases returned by the endpoint. Default here is 4.\n",
 "\n",
 "    offset: optional integer value to set from where to start returning data.\n",
 "    Default is 0.\n",
 "\n",
 "    # Procedure:\n",
 "\n",
 "    Do GET queries to /v1.0/contracts/{contract_ticker_symbol}/price-releases/\n",
 "    with a Bearer token authorization HTTP header.\n",
 "    \"\"\"\n",
 "    print(\">>>> Get price releases for {}\".format(ticker))\n",
 "\n",
 "    query_params = \"?limit={}\".format(limit)\n",
 "    if offset is not None:\n",
 "        query_params += \"&offset={}\".format(offset)\n",
 "\n",
 "    content = do_api_get_query(\n",
 "        uri=\"/v1.0/contracts/{}/price-releases/{}\".format(ticker, query_params),\n",
 "        access_token=access_token,\n",
 "    )\n",
 "\n",
 "    my_dict = content[\"data\"]\n",
 "\n",
 "    for release in content[\"data\"]:\n",
 "        release_date = release[\"releaseDate\"]\n",
 "\n",
 "        print(\"- release date =\", release_date)\n",
 "\n",
 "        data_points = release[\"data\"][0][\"dataPoints\"]\n",
 "\n",
 "        for data_point in data_points:\n",
 "            period_start_at = data_point[\"deliveryPeriod\"][\"startAt\"]\n",
 "\n",
 "            spark_prices = dict()\n",
 "            for unit, prices in data_point[\"derivedPrices\"].items():\n",
 "                spark_prices[unit] = prices[\"spark\"]\n",
 "\n",
 "            print(\n",
 "                f\"Spark Price={spark_prices} for period starting on {period_start_at}\"\n",
 "            )\n",
 "\n",
 "    return my_dict\n",
 "\n",
 "\n",
 "### Define which price product you want to retrieve\n",
 "my_ticker = tickers[2]\n",
 "\n",
 "my_dict_hist = fetch_historical_price_releases(access_token, my_ticker, limit=10)" ] },
 { "cell_type": "markdown", "id": "99be9416", "metadata": {}, "source": [
 "### Formatting into a Pandas DataFrame\n",
 "\n",
 "The returned data contains several nested lists and dictionaries. If we know which variables we want, we can store their values in lists and create a Pandas DataFrame.\n",
 "\n",
 "Within a new dictionary, we create empty lists for the variables:\n",
 "- Release Dates\n",
 "- Start of Period\n",
 "- Ticker\n",
 "- Price in dollars/day\n",
 "- Price in dollars/MMBtu\n",
 "- The spread of the data used to calculate the Spot Price\n",
 "    - Min\n",
 "    - Max\n",
 "\n",
 "The dictionary is then transformed into a Pandas DataFrame for readability and ease of use (a minimal implementation of this helper follows below). " ] },
 { "cell_type": "markdown", "id": "00089782", "metadata": {}, "source": [
 "## N.B. \n",
 "This JSON structure is not consistent across all datasets, and so might need to be amended when calling other Spark contracts." ] }
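, { "cell_type": "markdown", "id": "3c5e7a90", "metadata": {}, "source": [
 "Below we define the 'store_and_format' helper used in the next cell. This is a minimal sketch based on the Spark25s JSON structure shown above: it loops through each release, appends the values we want to lists held in a dictionary, and then converts that dictionary into a Pandas DataFrame." ] },
 { "cell_type": "code", "execution_count": null, "id": "4d6f8b21", "metadata": {}, "outputs": [], "source": [
 "# Minimal sketch of the 'store_and_format' helper called below.\n",
 "# It assumes the Spark25s structure shown above -- amend the keys for contracts\n",
 "# that return a different JSON structure.\n",
 "def store_and_format(hist_data):\n",
 "    data_dict = {\n",
 "        \"ticker\": [],\n",
 "        \"Period Start\": [],\n",
 "        \"USDperday\": [],\n",
 "        \"USDperdayMax\": [],\n",
 "        \"USDperdayMin\": [],\n",
 "        \"USDperMMBtu\": [],\n",
 "        \"Release Date\": [],\n",
 "    }\n",
 "\n",
 "    for release in hist_data:\n",
 "        release_date = release[\"releaseDate\"]\n",
 "        # each release carries its contract id, as shown in the raw output above\n",
 "        ticker = release[\"contractId\"]\n",
 "\n",
 "        for data_point in release[\"data\"][0][\"dataPoints\"]:\n",
 "            usd_per_day = data_point[\"derivedPrices\"][\"usdPerDay\"]\n",
 "            usd_per_mmbtu = data_point[\"derivedPrices\"][\"usdPerMMBtu\"]\n",
 "\n",
 "            data_dict[\"ticker\"].append(ticker)\n",
 "            data_dict[\"Period Start\"].append(data_point[\"deliveryPeriod\"][\"startAt\"])\n",
 "            data_dict[\"USDperday\"].append(usd_per_day[\"spark\"])\n",
 "            data_dict[\"USDperdayMax\"].append(usd_per_day[\"sparkMax\"])\n",
 "            data_dict[\"USDperdayMin\"].append(usd_per_day[\"sparkMin\"])\n",
 "            data_dict[\"USDperMMBtu\"].append(usd_per_mmbtu[\"spark\"])\n",
 "            data_dict[\"Release Date\"].append(release_date)\n",
 "\n",
 "    return pd.DataFrame(data_dict)" ] },
 { "cell_type": "code", "execution_count": 11, "id": "56aa19be", "metadata": {}, "outputs": [ { "data": { "text/html": [ "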
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
tickerPeriod StartUSDperdayUSDperdayMaxUSDperdayMinUSDperMMBtuRelease Date
0spark25s2024-10-186100065000500000.772024-10-03
1spark25s2024-10-176200065000530000.782024-10-02
2spark25s2024-10-166300065000570000.792024-10-01
3spark25s2024-10-156375065000580000.792024-09-30
4spark25s2024-10-126400067500580000.802024-09-27
\n", "
" ], "text/plain": [ " ticker Period Start USDperday USDperdayMax USDperdayMin USDperMMBtu \\\n", "0 spark25s 2024-10-18 61000 65000 50000 0.77 \n", "1 spark25s 2024-10-17 62000 65000 53000 0.78 \n", "2 spark25s 2024-10-16 63000 65000 57000 0.79 \n", "3 spark25s 2024-10-15 63750 65000 58000 0.79 \n", "4 spark25s 2024-10-12 64000 67500 58000 0.80 \n", "\n", " Release Date \n", "0 2024-10-03 \n", "1 2024-10-02 \n", "2 2024-10-01 \n", "3 2024-09-30 \n", "4 2024-09-27 " ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Running the function to store the values\n", "historical_df = store_and_format(my_dict_hist)\n", "historical_df.head()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.6" } }, "nbformat": 4, "nbformat_minor": 5 }