After months of weekly manual checks for new job postings, I started playing with AI agent platforms like Zapier to search the job boards for me. This post focuses on how I built and use my Zapier agent.

Every morning, I have an email in my inbox from my Zapier agent, providing a list of the new job board postings that match my specified criteria. A screenshot of this morning’s email is on the right.
- The agent checks different job boards each morning
- It evaluates new job postings against provided criteria
- It sends a brief summary of what it has found, along with job posting URLs so that I can further evaluate the posting with my Claude career coach and take action.
Compared to its n8n sibling, the Zapier agent is much simpler for the non-developer – there’s no hosting, no coding, no data schemas, no APIs or HTTP requests. It’s easy to get going and easy to maintain because…well…there’s no code and it’s completely self-contained in Zapier. And the emails it provides are fantastic.
But that simplicity means that you are 100% at the mercy of the LLM and how the LLM decides to spend your limited Zapier Activities. I spent a LOT of time wrestling with the prompt to minimize the number of times that it used its tools and arguing with the LLM for misinterpreting/ignoring parts of my instructions (…but at least the LLM was very cheerful about how it was ignoring me). Note that your experience may be better if you opt to pay for the service and have access to additional Agent Actions. I’d love to hear suggestions or additional ideas from others who are also using Zapier Agents.
Overall, this solution is very easily ‘good enough’ for the average job searcher, and I highly recommend it.
Configuring the Zapier agent

My Zapier job agent is simple, and I recommend that you start simple to keep it within the free tier limits, which are bounded by Agent Activities – the free tier provides 400 activities each month and 10 continuous activities per execution.
The Agent has four parts to the configuration:
- It is scheduled to trigger execution every morning
- Instructions drive execution using a detailed prompt
- The agent relies on the following tools:
  - Zapier Tables serves as the agent’s memory, remembering which job postings it has already processed
  - Visit Sites & Web Search enables the agent to interact with the web
  - Gmail: Send Email enables the agent to provide a daily summary
- I’m not using any knowledge sources [yet]
The agent currently consumes about 8-13 activities each day, keeping me below my 400 monthly limit.
The job board search prompt
The prompt is straightforward and follows this general flow:
- Perform a web search using a tailored search query
  - To minimize actions, I search different job boards based on the day of the week
- Remove URLs that we have already processed
  - Retrieve previously processed URLs from the data cache
  - Normalize the URLs to remove query parameters and common URL variations (e.g., job-boards.* vs boards.*) that tend to show up over time
  - Compare the normalized URLs against those in the cache
  - For URLs that are new, add them to the data cache
- Fetch and evaluate the new job postings – I’ve blogged about this in a couple of spots and won’t repeat myself here. For a deep dive, check out How to use Claude in a Job Search
- Send an email summary of the findings
The full agent prompt:
Trigger: Every morning at 8:00 AM.
Step 1: Find New Job Postings on Google and collect URLs without fetching content yet.
a. On Mondays, Wednesdays, and Saturdays use the following search query and collect the top 20 result URLs
site:greenhouse.io intitle:("jobRoleFunctionalParts") AND intitle:("jobRoleParts") AND intext:developer -intitle:sales
b. On Tuesdays and Thursdays, use the following search query and collect the top 20 result URLs
site:lever.co intitle:("jobRoleFunctionalParts") AND intitle:("jobRoleParts") AND intext:developer -intitle:sales
c. On Wednesdays and Fridays, use the following search query and collect the top 20 result URLs
site:myworkdayjobs.com intitle:("jobRoleFunctionalParts") AND intitle:("jobRoleParts") AND intext:developer -intitle:sales
d. On Mondays, Wednesdays, and Fridays, use the following search query and collect the top 20 result URLs
site:ashbyhq.com intitle:("jobRoleFunctionalParts") AND intitle:("jobRoleParts") AND intext:developer -intitle:sales
Step 2: Optimize URL deduplication with batch processing:
a. First, retrieve ALL existing URLs from Zapier Tables 'Processed URLs' table in a single batch call to create an in-memory list - use 'http' as the common substring for the queries
b. For each URL found in Step 1, normalize it by:
- Converting 'job-boards.greenhouse.io' to 'boards.greenhouse.io'
- Removing all URL parameters (everything after '?' or '#')
- Converting to lowercase
- Removing trailing slashes
- Ignoring postings that use localization strings other than 'en-us' (skip URLs with 'pl-pl', 'el-gr', 'th-th', etc.)
c. Compare each normalized URL against in-memory lists (Zapier Tables) to identify truly new URLs - no additional API calls needed
d. Only proceed with content fetching for URLs that are NOT found in the in-memory list
e. For all new URLs not in Zapier Tables, batch create records in Zapier Tables with the normalized URL and current date
Step 3: Fetch and validate job posting content ONLY for new URLs:
For each job posting URL that was not found in either database, fetch the web page content and validate that the job posting satisfies the following job search criteria:
a. REQUIRED CRITERIA (ALL must be met):
- Experience: Minimum [years]+ years stated requirement (if range given, use minimum). Roles requiring <[years] years are AUTO-REJECT.
- Seniority: [Titles], or clear equivalent. "Manager" roles are AUTO-REJECT unless explicitly "Senior Manager" at Fortune 500 company.
- Function: [Types of roles] role. AUTO-REJECT: [Types of roles]
- Technical Audience: Must focus on developers, engineers, or technical decision-makers. AUTO-REJECT: General marketing, consumer, or non-technical B2B audiences.
- Location: United States, Canada, or explicit remote-first for US timezones. AUTO-REJECT: Asia-Pacific, Europe-only, or requiring relocation.
b. PREFERRED CRITERIA (strongly favor):
- [years]+ years experience requirement
- Series B+ companies or established enterprises
- Platform/API/Infrastructure/Cloud/SaaS companies
- Clear product marketing responsibilities
- Team leadership component
c. AUTOMATIC DISQUALIFIERS:
- Expired/deleted job postings
- Agencies, consulting firms, or non-product companies
- [verticals]-focused companies
- Roles requiring relocation outside US
- "Ninja," "Rockstar," or similar unprofessional language
- Contract/temporary positions
Step 4: After processing all job postings, send an email to yourself of the newly discovered posts that are relevant.
Expected Outcome: An email containing a summary of all newly discovered and relevant job postings that were added to your Airtable data table, sent every morning at 8:00 AM.
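The day-of-week rotation in Step 1 of the prompt can be sketched as a simple lookup table. This is illustrative only – the agent interprets the schedule from plain English, and `role_terms` here is a placeholder for the bracketed role keywords in my actual queries:

```python
from datetime import date

# Weekday -> job board domains searched that day (0 = Monday),
# mirroring the schedule in Step 1 of the prompt
SCHEDULE = {
    0: ["greenhouse.io", "ashbyhq.com"],                       # Monday
    1: ["lever.co"],                                           # Tuesday
    2: ["greenhouse.io", "myworkdayjobs.com", "ashbyhq.com"],  # Wednesday
    3: ["lever.co"],                                           # Thursday
    4: ["myworkdayjobs.com", "ashbyhq.com"],                   # Friday
    5: ["greenhouse.io"],                                      # Saturday
}

def search_queries(today: date, role_terms: str = "jobRoleParts") -> list[str]:
    """Build one site-restricted search query per board scheduled for today."""
    boards = SCHEDULE.get(today.weekday(), [])  # Sunday: no boards, no actions
    return [
        f'site:{b} intitle:("{role_terms}") AND intext:developer -intitle:sales'
        for b in boards
    ]
```

Spreading the boards across the week like this is what keeps each run to a handful of searches instead of four.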
Zapier makes it really easy to create and configure your agent, and so I’m not going to go through the process here. If you want more detail on this, let me know directly or in the comments below and I can flesh this out further.
Managing your limited actions in the Free Tier
From the outset, the number of Agent Actions has been my nemesis in building with Zapier. The agent can do a lot, but each of those actions is going to cost you. ^_^ But, as they say, the obstacle becomes the way.
For me, I had to manage two parts of the action economy (#IYKYK) – keeping my monthly consumption under 400 and my daily consumption under 10. My daily workflows bounce between 8 and 15 activities, but they will pause themselves once they pass 10 activities in a run, with the following message:
I’ve reached the limit of 10 activities per run on Agents’ Free plan. Upgrade to a paid Agents plan to unlock up to 40 continuous activities per run. Shall I continue with the rest for now?
Unpausing it is easy: open the activity entry for that run, click ‘Yes, continue’, and it picks back up where it left off.

How I drove down my action consumption by focusing on my tool calls:
- I created a data store to retain processed URLs
  - I originally did this in Airtable because that was where I was tracking my applications, but I moved to Zapier Tables because Airtable couldn’t filter which columns were returned, and continued to consume more memory and actions than needed
  - I started doing a processed URL check for each URL, but moved to a ‘bulk download’ and ‘bulk check’ to dramatically decrease the number of actions I used each day
  - I also upload all of the new URLs in bulk to the data store, which saves a lot of actions
  - Until last month, I used both Zapier Tables and Airtable in my queries – Zapier to keep everything local and Airtable as a secondary check to avoid reprocessing results found by my n8n agent…but the Airtable API limits caused me to toss out the Airtable data store entirely
- My URL processing optimizes for filtering first
  - I explicitly tell the LLM to NOT DOWNLOAD THE PAGE CONTENT when doing the web searches – yes, it did this, and yes, it took me many discussions to get it to stop. Each one of those web page views/downloads costs you an action
  - I grab the top 20 results, normalize the URLs down to their base form, and THEN filter out all of those we’ve seen before
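The bulk pattern above is really just set membership: one fetch of the whole table, then in-memory filtering, instead of one lookup tool call per URL. A minimal sketch, where `processed` stands in for the single bulk read from Zapier Tables:

```python
def find_new_urls(candidates: list[str], processed: set[str]) -> list[str]:
    """Filter the day's search results down to unseen URLs.

    `processed` is the full set of previously seen URLs, fetched from the
    data store in ONE bulk call – each candidate is then checked in memory
    at zero additional action cost.
    """
    seen: set[str] = set()  # also dedupe within today's batch
    new_urls = []
    for url in candidates:
        if url not in processed and url not in seen:
            seen.add(url)
            new_urls.append(url)
    return new_urls
```

With 20 candidates, this is one bulk read plus one bulk write instead of up to 40 individual tool calls – which is the whole difference between staying under the 10-activity cap and blowing past it.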
Unfortunately, I can’t document how much these changes have saved me over time, because Zapier Agents doesn’t retain the Activities count in the job history beyond a couple of weeks.
What’s next
Although most of my time is spent working with my n8n agent, I really love Zapier Agents and I’ll continue to experiment with the platform here and with other tasks.
If you have further tips or want to let me know how this worked for you, I’d love to hear from you. You can reach me directly, via LinkedIn, or via the comments below.