
This recipe calls the daily HOS logs endpoint and writes a CSV row for each driver day on which an adverse driving, big day, or short haul exemption was claimed.
import csv
import datetime
import os

import dateutil.tz
import requests

token = os.environ["SAMSARA_API_TOKEN"]  # Samsara API token, read from the environment
num_days = 0  # days back from today to include; 0 limits the report to today

with open("exemption_report.csv", "w", newline="") as csv_file:
    fieldnames = [
        "Date",
        "Driver Name",
        "Driver ID",
        "Adverse Driving Exemption Claimed",
        "Big Day Exemption Claimed",
        "Short Haul Active",
    ]
    csv_dict_writer = csv.DictWriter(csv_file, fieldnames=fieldnames)
    csv_dict_writer.writeheader()

    pagination = {"hasNextPage": True, "endCursor": ""}
    start_date = (datetime.date.today() - datetime.timedelta(days=num_days)).isoformat()
    end_date = datetime.date.today().isoformat()

    while pagination["hasNextPage"]:
        resp = requests.get(
            "https://api.samsara.com/fleet/hos/daily-logs",
            params={
                "startDate": start_date,
                "endDate": end_date,
                "after": pagination["endCursor"],
            },
            headers={"Authorization": "Bearer " + token},
        )
        resp.raise_for_status()  # fail fast on auth, permission, or rate-limit errors
        response = resp.json()

        pagination = response["pagination"]

        for log in response["data"]:
            metadata = log["logMetaData"]
            has_exemption = (
                metadata["adverseDrivingClaimed"]
                or metadata["bigDayClaimed"]
                or metadata["isUsShortHaulActive"]
            )

            if not has_exemption:
                continue

            row = {}
            # Convert the UTC start time to a date in the driver's local timezone
            row["Date"] = (
                datetime.datetime.fromisoformat(log["startTime"].replace("Z", "+00:00"))
                .astimezone(dateutil.tz.gettz(log["driver"]["timezone"]))
                .date()
            )
            row["Driver Name"] = log["driver"]["name"]
            row["Driver ID"] = log["driver"]["id"]
            row["Adverse Driving Exemption Claimed"] = metadata["adverseDrivingClaimed"]
            row["Big Day Exemption Claimed"] = metadata["bigDayClaimed"]
            row["Short Haul Active"] = metadata["isUsShortHaulActive"]

            csv_dict_writer.writerow(row)

How it works

1. Import required packages

The sample uses Python 3.7 or newer (datetime.datetime.fromisoformat requires 3.7). Install requests and python-dateutil if they are not already available.
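Both third-party packages can be installed with pip; the exact invocation below is one common setup, adjust it for your environment:

```shell
# Install the third-party packages the script imports.
python3 -m pip install requests python-dateutil
```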
2. Set up configuration

Configure the API token and the number of days to include in the report.
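The configuration step can be sketched as a small helper that fails early when the token is missing; load_config and the REPORT_NUM_DAYS override are names introduced here for illustration, not part of the recipe:

```python
import os

def load_config(env=os.environ):
    # Read the API token; raise a clear error instead of a KeyError if unset.
    token = env.get("SAMSARA_API_TOKEN")
    if not token:
        raise RuntimeError("Set the SAMSARA_API_TOKEN environment variable.")
    # REPORT_NUM_DAYS is a hypothetical override; the recipe hard-codes num_days.
    num_days = int(env.get("REPORT_NUM_DAYS", "0"))
    return token, num_days
```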
3. Create the CSV report

The report includes the HOS day, driver name, driver ID, and any exemptions claimed for that day.
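A quick round-trip through csv.DictWriter and csv.DictReader shows the layout the report uses; the row values here are made up for illustration:

```python
import csv
import io

fieldnames = [
    "Date", "Driver Name", "Driver ID",
    "Adverse Driving Exemption Claimed",
    "Big Day Exemption Claimed", "Short Haul Active",
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fieldnames)
writer.writeheader()
writer.writerow({
    "Date": "2024-01-02", "Driver Name": "A. Driver", "Driver ID": "123",
    "Adverse Driving Exemption Claimed": True,
    "Big Day Exemption Claimed": False, "Short Haul Active": False,
})

buf.seek(0)
rows = list(csv.DictReader(buf))  # csv stringifies booleans on the way out
```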
4. Set query parameters

The script initializes the pagination state, then sets startDate and endDate in YYYY-MM-DD format.
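The date-window computation can be isolated into a helper; report_window is a name introduced here, not from the recipe:

```python
import datetime

def report_window(num_days, today=None):
    # Returns (startDate, endDate) as YYYY-MM-DD strings, matching the recipe:
    # the window runs from num_days before today through today.
    today = today or datetime.date.today()
    start = today - datetime.timedelta(days=num_days)
    return start.isoformat(), today.isoformat()
```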
5. Get daily HOS logs

Call GET /fleet/hos/daily-logs with startDate, endDate, and the after cursor.
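One page of the request can be wrapped in a function. The session parameter and the raise_for_status call are additions here (so the call can be exercised without hitting the live API, and so HTTP errors surface immediately):

```python
def fetch_daily_logs_page(token, start_date, end_date, cursor="", session=None):
    # Fetch one page of GET /fleet/hos/daily-logs, raising on HTTP errors.
    if session is None:
        import requests  # deferred so a stub session can be injected in tests
        session = requests
    resp = session.get(
        "https://api.samsara.com/fleet/hos/daily-logs",
        params={"startDate": start_date, "endDate": end_date, "after": cursor},
        headers={"Authorization": "Bearer " + token},
    )
    resp.raise_for_status()
    return resp.json()
```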
6. Paginate through logs

Continue requesting pages until pagination.hasNextPage is false, passing each page's pagination.endCursor as the after parameter on the next request.
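The cursor loop generalizes to a small generator; fetch_page stands in for the HTTP call and takes the cursor string:

```python
def iter_daily_logs(fetch_page):
    # fetch_page(cursor) must return a parsed response with "data" and
    # "pagination" keys, as GET /fleet/hos/daily-logs does.
    pagination = {"hasNextPage": True, "endCursor": ""}
    while pagination["hasNextPage"]:
        page = fetch_page(pagination["endCursor"])
        pagination = page["pagination"]
        yield from page["data"]
```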
7. Process each daily log

If logMetaData shows a claimed exemption, write the date, driver, and exemption fields to the CSV file.
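The per-log filter and row construction can be sketched as one function. exemption_row is a name introduced here, and the UTC fallback when python-dateutil is absent is also an addition, not part of the recipe:

```python
import datetime

try:
    import dateutil.tz  # third-party; the recipe requires it
except ImportError:
    dateutil = None  # fall back to UTC dates below

def exemption_row(log):
    # Return a report row for a daily log, or None if no exemption was claimed.
    meta = log["logMetaData"]
    if not (meta["adverseDrivingClaimed"] or meta["bigDayClaimed"]
            or meta["isUsShortHaulActive"]):
        return None
    start = datetime.datetime.fromisoformat(log["startTime"].replace("Z", "+00:00"))
    tz = dateutil.tz.gettz(log["driver"]["timezone"]) if dateutil else datetime.timezone.utc
    return {
        "Date": start.astimezone(tz).date().isoformat(),
        "Driver Name": log["driver"]["name"],
        "Driver ID": log["driver"]["id"],
        "Adverse Driving Exemption Claimed": meta["adverseDrivingClaimed"],
        "Big Day Exemption Claimed": meta["bigDayClaimed"],
        "Short Haul Active": meta["isUsShortHaulActive"],
    }
```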
See Daily Duty Status Summaries for more details.