HarvestAPI
Connect to HarvestAPI to log and retrieve time entries in Harvest, and scrape LinkedIn profiles, companies, and job listings for recruiting, sales prospecting, and market research.
Supports authentication: API Key
Set up the agent connector
Register your HarvestAPI key with Scalekit so it can authenticate LinkedIn data requests on your behalf. You’ll need an API key from your HarvestAPI dashboard.
1. Generate an API key
   - Sign in to your HarvestAPI dashboard.
   - Click Create API key, give it a descriptive name (e.g., My Agent), and click Create.
   - Copy the generated API key. It is shown only once — store it securely before navigating away.
2. Create a connection in Scalekit
   - In the Scalekit dashboard, go to Agent Auth → Create Connection. Find HarvestAPI and click Create.
3. Add a connected account
   - Open the connection you just created and click the Connected Accounts tab → Add account. Fill in the required fields:
     - Your User’s ID — a unique identifier for the user in your system
     - API Key — the key you copied in step 1
   - Click Save.
Once a connected account is set up, your agent can call any of the tools below on behalf of that user — Scalekit injects the stored API key into every request automatically.
```python
import os

import scalekit.client
from dotenv import load_dotenv

load_dotenv()

connection_name = "harvestapi"  # connection name from Scalekit dashboard
identifier = "user_123"  # must match the identifier used when adding the connected account

# Get credentials from app.scalekit.com → Developers → API Credentials
scalekit_client = scalekit.client.ScalekitClient(
    client_id=os.getenv("SCALEKIT_CLIENT_ID"),
    client_secret=os.getenv("SCALEKIT_CLIENT_SECRET"),
    env_url=os.getenv("SCALEKIT_ENV_URL"),
)

# Scrape a LinkedIn profile by URL
profile = scalekit_client.actions.request(
    connection_name=connection_name,
    identifier=identifier,
    path="/profile",
    method="GET",
    params={"url": "https://www.linkedin.com/in/satyanadella"},
)
print(profile)

# Search LinkedIn for people by title and location
people = scalekit_client.actions.request(
    connection_name=connection_name,
    identifier=identifier,
    path="/search/people",
    method="GET",
    params={"title": "VP of Engineering", "location": "San Francisco, CA"},
)
print(people)

# Scrape a LinkedIn company page
company = scalekit_client.actions.request(
    connection_name=connection_name,
    identifier=identifier,
    path="/company",
    method="GET",
    params={"url": "https://www.linkedin.com/company/openai"},
)
print(company)

# Search LinkedIn job listings by keyword and location
jobs = scalekit_client.actions.request(
    connection_name=connection_name,
    identifier=identifier,
    path="/search/jobs",
    method="GET",
    params={"keywords": "machine learning engineer", "location": "New York, NY"},
)
print(jobs)

# Scrape a single job listing by URL
job = scalekit_client.actions.request(
    connection_name=connection_name,
    identifier=identifier,
    path="/job",
    method="GET",
    params={"url": "https://www.linkedin.com/jobs/view/1234567890"},
)
print(job)

# Bulk scrape multiple LinkedIn profiles in one request
bulk = scalekit_client.actions.request(
    connection_name=connection_name,
    identifier=identifier,
    path="/profiles/bulk",
    method="POST",
    json={
        "urls": [
            "https://www.linkedin.com/in/satyanadella",
            "https://www.linkedin.com/in/jeffweiner08",
            "https://www.linkedin.com/in/reidhoffman",
        ]
    },
)
print(bulk)
```

Tool list
log_time_entry
Create a new time entry in Harvest for a specific project and task. Supports both duration-based (hours) and start/end time-based logging. Returns the created time entry with its ID, billable status, and invoice details.
| Name | Type | Required | Description |
|---|---|---|---|
| project_id | integer | Yes | Harvest project ID to log time against |
| task_id | integer | Yes | Harvest task ID within the project |
| spent_date | string | Yes | Date of the time entry in YYYY-MM-DD format |
| hours | number | No | Duration in decimal hours (e.g., 1.5 for 90 minutes). Use this or started_time/ended_time. |
| started_time | string | No | Start time for timer-based entry (e.g., 8:00am). Requires ended_time. |
| ended_time | string | No | End time for timer-based entry (e.g., 9:30am). Requires started_time. |
| notes | string | No | Notes or description for the time entry |
| user_id | integer | No | User ID to log time for. Defaults to the authenticated user. Requires admin permissions to set for other users. |
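The parameter table above can be sketched as request payloads. This is a hedged example, not the connector's canonical usage: the project and task IDs are hypothetical, and the "/time_entries" path is an assumption (verify against your connection's tool mapping). The guarded call mirrors the client setup shown earlier.

```python
import os

# Duration-based time entry: the three required fields plus hours and notes.
# project_id and task_id are hypothetical; discover real ones with list_projects.
payload = {
    "project_id": 14308069,
    "task_id": 8083365,
    "spent_date": "2025-06-03",  # YYYY-MM-DD
    "hours": 1.5,                # 90 minutes; omit when using started_time/ended_time
    "notes": "Sprint planning",
}

# Timer-based alternative: started_time and ended_time must be given together.
timer_payload = {
    "project_id": 14308069,
    "task_id": 8083365,
    "spent_date": "2025-06-03",
    "started_time": "8:00am",
    "ended_time": "9:30am",
}

# Only attempt the request when Scalekit credentials are configured.
if os.getenv("SCALEKIT_CLIENT_ID"):
    import scalekit.client
    client = scalekit.client.ScalekitClient(
        client_id=os.getenv("SCALEKIT_CLIENT_ID"),
        client_secret=os.getenv("SCALEKIT_CLIENT_SECRET"),
        env_url=os.getenv("SCALEKIT_ENV_URL"),
    )
    entry = client.actions.request(
        connection_name="harvestapi",
        identifier="user_123",
        path="/time_entries",  # assumed path
        method="POST",
        json=payload,
    )
    print(entry)
```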
list_time_entries
List time entries in your Harvest account with optional filters by project, user, client, task, or date range. Returns paginated time entries with hours logged, notes, billable status, and associated project, task, and user details.
| Name | Type | Required | Description |
|---|---|---|---|
| project_id | integer | No | Filter by Harvest project ID |
| user_id | integer | No | Filter by Harvest user ID |
| client_id | integer | No | Filter by Harvest client ID |
| task_id | integer | No | Filter by Harvest task ID |
| from | string | No | Start of date range in YYYY-MM-DD format |
| to | string | No | End of date range in YYYY-MM-DD format |
| is_billed | boolean | No | Filter by billed status |
| is_running | boolean | No | Return only active running timers when true |
| page | integer | No | Page number for pagination. Defaults to 1. |
| per_page | integer | No | Number of results per page (max 100). Defaults to 100. |
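Since results are paginated, fetching a full date range means walking pages until a short page comes back. The loop below is a sketch under assumptions: the user_id is hypothetical, the "/time_entries" path and the time_entries response key are guesses to verify against your connection, and the guarded call reuses the client setup shown earlier.

```python
import os

# One user's entries for June 2025 (user_id is hypothetical).
params = {
    "user_id": 1782959,
    "from": "2025-06-01",
    "to": "2025-06-30",
    "page": 1,
    "per_page": 100,  # maximum allowed
}

if os.getenv("SCALEKIT_CLIENT_ID"):
    import scalekit.client
    client = scalekit.client.ScalekitClient(
        client_id=os.getenv("SCALEKIT_CLIENT_ID"),
        client_secret=os.getenv("SCALEKIT_CLIENT_SECRET"),
        env_url=os.getenv("SCALEKIT_ENV_URL"),
    )
    while True:
        resp = client.actions.request(
            connection_name="harvestapi",
            identifier="user_123",
            path="/time_entries",  # assumed path
            method="GET",
            params=params,
        )
        entries = resp.get("time_entries", [])  # response shape assumed
        for e in entries:
            print(e.get("spent_date"), e.get("hours"), e.get("notes"))
        # A page shorter than per_page means we have reached the end.
        if len(entries) < params["per_page"]:
            break
        params["page"] += 1
```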
list_projects
List all projects in your Harvest account with optional filters. Returns project details including name, client, budget, billing method, start and end dates, and active status.
| Name | Type | Required | Description |
|---|---|---|---|
| client_id | integer | No | Filter projects by client ID |
| is_active | boolean | No | Filter by active status |
| updated_since | string | No | ISO 8601 datetime — return only projects updated after this timestamp |
| page | integer | No | Page number for pagination. Defaults to 1. |
| per_page | integer | No | Number of results per page (max 100). Defaults to 100. |
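A filter sketch for list_projects: the client_id is hypothetical, and the updated_since value just shows the expected ISO 8601 shape. Pass the dict as params in an actions.request call like the ones in the snippet earlier.

```python
# Active projects for one client, changed since the start of Q2 2025.
params = {
    "client_id": 5735776,  # hypothetical client ID
    "is_active": True,
    "updated_since": "2025-04-01T00:00:00Z",  # ISO 8601 datetime
    "page": 1,
    "per_page": 100,
}
```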
list_users
List all users in your Harvest account with optional filters. Returns user profiles including name, email, roles, and weekly capacity.
| Name | Type | Required | Description |
|---|---|---|---|
| is_active | boolean | No | Filter by active status |
| updated_since | string | No | ISO 8601 datetime — return only users updated after this timestamp |
| page | integer | No | Page number for pagination. Defaults to 1. |
| per_page | integer | No | Number of results per page (max 100). Defaults to 100. |
get_user
Retrieve a Harvest user profile by user ID, including name, email, roles, weekly capacity, and avatar. Use list_users to discover user IDs. Requires the Harvest-Account-Id header returned during OAuth.
| Name | Type | Required | Description |
|---|---|---|---|
| user_id | string | Yes | Harvest user ID |
get_company
Retrieve the Harvest company (account) information for the authenticated user, including company name, base URI, plan type, clock format, currency, and weekly capacity settings. Takes no parameters.
scrape_profile
Scrape a LinkedIn profile by URL or public identifier, returning contact details, employment history, education, skills, and more. Provide either profile_url or public_identifier. Use main=true for a simplified profile that costs fewer credits. Optionally set find_email=true to look up the contact’s email address (costs extra credits).
| Name | Type | Required | Description |
|---|---|---|---|
| profile_url | string | No | Full LinkedIn profile URL (e.g., https://www.linkedin.com/in/satyanadella). Use this or public_identifier. |
| public_identifier | string | No | LinkedIn profile handle — the slug after /in/ (e.g., satyanadella). Use this or profile_url. |
| main | boolean | No | Return a simplified profile using fewer credits (~2.6s). Defaults to false (full profile, ~4.9s). |
| find_email | boolean | No | Attempt to find the contact’s email address. Costs extra credits per successful match. Defaults to false. |
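The two identification modes are mutually exclusive, so a request carries one or the other. A sketch of each, using the handle from the table above:

```python
# By handle: the slug after /in/, with the cheaper simplified response.
# find_email stays at its default (False) since matches cost extra credits.
params_by_handle = {
    "public_identifier": "satyanadella",
    "main": True,
}

# By full URL: equivalent lookup, requesting the full profile this time.
params_by_url = {
    "profile_url": "https://www.linkedin.com/in/satyanadella",
}
```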
scrape_company
Scrape a LinkedIn company page for overview, headcount, employee count range, follower count, locations, specialties, industries, and funding data. Provide one of company_url, universal_name, or search.
| Name | Type | Required | Description |
|---|---|---|---|
| company_url | string | No | Full LinkedIn company page URL (e.g., https://www.linkedin.com/company/microsoft). |
| universal_name | string | No | LinkedIn company universal name — the slug after /company/ (e.g., microsoft). |
| search | string | No | Company name to search for. Returns the most relevant match. |
search_people
Search LinkedIn for people using filters such as job title, current company, location, and industry. Uses LinkedIn Lead Search for unmasked results. Returns paginated profiles with name, title, location, and LinkedIn URL. All parameters are optional; comma-separate multiple values.
| Name | Type | Required | Description |
|---|---|---|---|
| keywords | string | No | Free-text search terms matched against name, headline, and bio |
| title | string | No | Job title filter (e.g., VP of Engineering). Comma-separate multiple values. |
| company | string | No | Current company name filter (e.g., OpenAI). Comma-separate multiple values. |
| location | string | No | Location filter by city, state, or country. Comma-separate multiple values. |
| industry | string | No | Industry vertical filter. Comma-separate multiple values. |
| page | integer | No | Page number for pagination. Starts at 1. Defaults to 1. |
search_jobs
Search LinkedIn job listings by keyword, location, company, workplace type, employment type, experience level, and salary. Returns paginated job listings with title, company, location, and LinkedIn URL.
| Name | Type | Required | Description |
|---|---|---|---|
| keywords | string | No | Job title or skill keywords to search for (e.g., machine learning engineer) |
| location | string | No | Location filter by city, state, or country (e.g., New York, NY) |
| company | string | No | Filter listings by company name (e.g., Stripe) |
| workplace_type | string | No | Filter by workplace arrangement: remote, on-site, or hybrid |
| employment_type | string | No | Filter by employment type: full-time, part-time, contract, temporary, volunteer, internship |
| experience_level | string | No | Filter by seniority: entry, associate, mid-senior, director, executive |
| salary | string | No | Salary range filter (format varies by region) |
| page | integer | No | Page number for pagination. Starts at 1. Defaults to 1. |
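The earlier search_jobs example used only keywords and location; the categorical filters combine with them in the same params dict. A sketch, with each value drawn from the enumerations in the table above:

```python
# Remote, full-time, mid-senior machine learning roles.
params = {
    "keywords": "machine learning engineer",
    "location": "United States",
    "workplace_type": "remote",
    "employment_type": "full-time",
    "experience_level": "mid-senior",
    "page": 1,
}
```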
scrape_job
Retrieve full job listing details from LinkedIn by job URL or job ID. Returns title, company, description, requirements, salary, location, workplace type, employment type, applicant count, and application details.
| Name | Type | Required | Description |
|---|---|---|---|
| job_url | string | No | Full LinkedIn job posting URL (e.g., https://www.linkedin.com/jobs/view/1234567890). Use this or job_id. |
| job_id | string | No | LinkedIn job listing ID. Use this or job_url. |
bulk_scrape_profiles
Batch scrape multiple LinkedIn profiles in a single request using the HarvestAPI Apify scraper. Accepts a list of LinkedIn profile URLs. Returns an array of profile objects in the same order as the input. Each profile counts as one credit.
Pricing: $4 per 1,000 profiles; $10 per 1,000 profiles with email. Requires an Apify API token.
| Name | Type | Required | Description |
|---|---|---|---|
| urls | array<string> | Yes | List of LinkedIn profile URLs to scrape. Each entry must be a full URL (e.g., https://www.linkedin.com/in/username). Maximum 50 URLs per request. |
| apify_token | string | Yes | Apify API token. Obtain from console.apify.com/settings/integrations. |
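Per the table above, the request body must carry apify_token alongside urls. A sketch of the body shape only; reading the token from an APIFY_TOKEN environment variable is an assumption for illustration:

```python
import os

# Bulk request body: full profile URLs plus the required Apify token.
body = {
    "urls": [
        "https://www.linkedin.com/in/satyanadella",
        "https://www.linkedin.com/in/reidhoffman",
    ],
    "apify_token": os.getenv("APIFY_TOKEN", ""),  # from console.apify.com/settings/integrations
}

# The API caps batches at 50 URLs per request.
assert len(body["urls"]) <= 50
```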