Found LangChain’s DeepAgent package. It’s built on LangGraph and inspired by Claude Code and Deep Research. Built a job search agent to try it out.
About DeepAgent
DeepAgent is a standalone Python library for building agents that handle complex, multi-step tasks. It comes with three core tools:
- write_todos - Breaks tasks into steps, tracks progress, adapts plans as new info comes in
- File tools - ls, read_file, write_file, edit_file for context management
- task - Spawns subagents with isolated context
Built on LangGraph for graph-based execution and state management. Supports long-term memory through LangGraph’s Store - agents can save and retrieve info across conversations.
What the Agent Does
The job search agent takes a CV, extracts skills (both explicit and inferred), searches for matching jobs across multiple sources, then ranks results by relevance.
The flow:
- Orchestrator receives CV path
- Spawns CV parser subagent → extracts skills, experience, preferences
- Spawns job search subagent → queries Tavily, Brave, Firecrawl
- Collects results, ranks by relevance score
- Writes final output to file
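The flow above can be sketched as plain functions. Everything here is illustrative: the function names, the keyword list, and the stub catalog are mine, not the repo's actual subagent logic, which delegates these steps to LLM-driven subagents.

```python
# Illustrative sketch of the orchestrator flow; names and data are
# hypothetical stand-ins for the real subagents.

def parse_cv(cv_text: str) -> dict:
    """CV parser step: extract skills (stub keyword match for illustration)."""
    known_skills = {"python", "langgraph", "sql", "docker"}
    found = {s for s in known_skills if s in cv_text.lower()}
    return {"skills": sorted(found)}

def search_jobs(skills: list[str]) -> list[dict]:
    """Job search step: would query Tavily/Brave/Firecrawl; stubbed here."""
    catalog = [
        {"title": "Backend Engineer", "tags": ["python", "sql"]},
        {"title": "Agent Developer", "tags": ["python", "langgraph"]},
        {"title": "Data Analyst", "tags": ["sql", "excel"]},
    ]
    return [job for job in catalog if set(job["tags"]) & set(skills)]

def rank_jobs(jobs: list[dict], skills: list[str]) -> list[dict]:
    """Rank by overlap between job tags and extracted skills."""
    return sorted(jobs, key=lambda j: len(set(j["tags"]) & set(skills)),
                  reverse=True)

profile = parse_cv("Experienced in Python and LangGraph...")
ranked = rank_jobs(search_jobs(profile["skills"]), profile["skills"])
print([j["title"] for j in ranked])  # ['Agent Developer', 'Backend Engineer']
```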
What Made It Click
Sub-agent coordination - The task tool spawns specialized subagents with context isolation. One handles CV parsing. Another handles job searching. Main agent stays focused while subagents do specific work. You define what each agent does, not how to do every step.
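Declaratively, "what each agent does" boils down to config dicts handed to the main agent. The shape below follows the name/description/prompt dict format I understand deepagents to use, but treat the specifics (and the prompts themselves) as assumptions, not the repo's actual configs.

```python
# Hypothetical subagent declarations; field values are illustrative.
cv_parser = {
    "name": "cv-parser",
    "description": "Extracts skills, experience, and preferences from a CV.",
    "prompt": "Read the CV with read_file and write extracted skills to a file.",
}

job_searcher = {
    "name": "job-searcher",
    "description": "Searches job boards for roles matching extracted skills.",
    "prompt": "Use the search tools to find jobs matching the skills file.",
}

subagents = [cv_parser, job_searcher]
print([s["name"] for s in subagents])  # ['cv-parser', 'job-searcher']
```

Each subagent gets only its own prompt and tools, which is what keeps the orchestrator's context clean.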
File system for context management - DeepAgent uses file tools (read, write, edit) to offload large context and prevent token overflow. The agent reads the CV, writes intermediate results, and manages state through files. Feels natural. Real work, not just chat.
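The offloading idea is simple enough to show with the standard library: persist a large intermediate artifact to disk, keep only a short pointer in context, and read back just what the next step needs. This is a minimal sketch of the pattern, not DeepAgent's own file-tool implementation.

```python
# Sketch of file-based context offloading: write a large intermediate
# result to disk, carry only a short summary forward.
import json
import tempfile
from pathlib import Path

workdir = Path(tempfile.mkdtemp())

# A subagent result too large to keep in context: persist it.
search_results = [{"title": f"Job {i}", "score": i} for i in range(200)]
results_file = workdir / "search_results.json"
results_file.write_text(json.dumps(search_results))

# The orchestrator carries only a pointer plus a summary.
context_note = f"{len(search_results)} results saved to {results_file.name}"
print(context_note)  # 200 results saved to search_results.json

# Later, the ranking step reads back just the top slice it needs.
top = sorted(json.loads(results_file.read_text()),
             key=lambda j: -j["score"])[:3]
print([j["title"] for j in top])
```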
DeepAgent framework - Built-in planning tool breaks tasks into steps. Task tool spawns subagents with context isolation. The framework handles orchestration so you focus on agent logic.
Tech Stack
- DeepAgent for the multi-agent framework
- DeepSeek-V3 as the LLM (cost-effective, works well)
- Tavily, Brave, Firecrawl for job search sources
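Pulling from three sources means the same posting can show up more than once, so results need merging and de-duplication. A sketch with fake fetchers standing in for the real Tavily/Brave/Firecrawl clients (the URLs and titles are made up):

```python
# Fake fetchers stand in for the real Tavily / Brave / Firecrawl clients;
# the point is merging results and de-duplicating by URL.
def tavily_search(query):
    return [{"url": "https://jobs.example/1", "title": "Python Dev"}]

def brave_search(query):
    return [{"url": "https://jobs.example/1", "title": "Python Dev"},
            {"url": "https://jobs.example/2", "title": "Agent Engineer"}]

def firecrawl_search(query):
    return [{"url": "https://jobs.example/3", "title": "Backend Role"}]

def aggregate(query):
    seen, merged = set(), []
    for source in (tavily_search, brave_search, firecrawl_search):
        for hit in source(query):
            if hit["url"] not in seen:
                seen.add(hit["url"])
                merged.append(hit)
    return merged

print(len(aggregate("python agent jobs")))  # 3 unique results
```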
Why This Architecture Works
The subagent pattern keeps things clean. CV parsing is its own isolated context. Job searching is separate. The main orchestrator does not get polluted with all the details. Each agent has a focused job.
File-based context prevents token overflow. Instead of stuffing everything into the prompt, the agent writes to files and reads what it needs. Works well for processing longer CVs.
The write_todos tool is useful for breaking down the job search into steps. Parse CV → identify skills → search jobs → rank results → output. The agent tracks its own progress.
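That self-tracking behavior amounts to a todo list with statuses. A minimal stand-in (not the write_todos tool itself) for how the plan above gets tracked:

```python
# Minimal stand-in for write_todos-style tracking: the agent keeps a
# plan as a list and marks steps done as it works through them.
todos = [
    {"task": "Parse CV", "status": "pending"},
    {"task": "Identify skills", "status": "pending"},
    {"task": "Search jobs", "status": "pending"},
    {"task": "Rank results", "status": "pending"},
    {"task": "Write output", "status": "pending"},
]

def complete(todos, task):
    for t in todos:
        if t["task"] == task:
            t["status"] = "done"

complete(todos, "Parse CV")
complete(todos, "Identify skills")
remaining = [t["task"] for t in todos if t["status"] == "pending"]
print(remaining)  # ['Search jobs', 'Rank results', 'Write output']
```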
What I Took Away
DeepAgent makes building agents feel clean. Subagent spawning, file-based context, planning tools. The pieces fit together well.
The LangGraph foundation means you get state management and persistence for free. Could extend this to remember user preferences across sessions.
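The cross-session idea is a namespaced key-value store: save under (namespace, key) in one conversation, read it back in the next. A toy version mimicking those semantics (not LangGraph's actual Store class):

```python
# Toy namespaced key-value store mimicking the shape of LangGraph's
# Store (namespace + key -> value), to show cross-session recall.
class MemoryStore:
    def __init__(self):
        self._data = {}

    def put(self, namespace, key, value):
        self._data[(namespace, key)] = value

    def get(self, namespace, key, default=None):
        return self._data.get((namespace, key), default)

store = MemoryStore()

# Session 1: the agent learns a preference and saves it.
store.put(("users", "user-1"), "preferences", {"remote_only": True})

# Session 2: a fresh conversation recalls it before searching.
prefs = store.get(("users", "user-1"), "preferences", default={})
print(prefs.get("remote_only"))  # True
```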
Good framework to have in the toolbox.
Repository
https://github.com/Rahat-Kabir/job-search-agent