Client & context
A US-based company was building an HR information system (HRIS) to centralize candidate, job, and application data and use LLMs to support candidate assessment. Their data lived partly in a legacy Ruby system and partly in uploaded resumes and documents processed by a Python/OCR pipeline.
I joined as a cross-functional software engineer to help connect these pieces and make the LLM workflows reliable and usable in production.
Challenges
- Legacy data locked in a Ruby system and not easily available to the new HRIS.
- A Python-based resume/OCR/LLM pipeline that needed better error handling and robustness.
- Need for per-job configuration of LLM requests (different roles require different prompts and rate limits).
- Lack of a structured feedback loop to improve candidate assessments over time.
- Requirements around GDPR compliance, data handling, and documentation.
- HRIS and LLM services had to run reliably across staging and production AWS environments.
What I did
1. Data import from legacy Ruby system
- Built a Laravel command to import candidate, job, and application data from the legacy Ruby system into the new HRIS.
- Ensured the data model in Laravel could support future features such as analytics and reporting.
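The import logic can be sketched in Python for brevity (the production version was a Laravel command; the field names and the `upsert` callback below are illustrative, not the real legacy schema):

```python
# Sketch of the legacy-import mapping step. Field names are illustrative,
# not the actual legacy Ruby-system schema.

def map_legacy_candidate(row: dict) -> dict:
    """Map one legacy record to the new HRIS schema."""
    return {
        "external_id": row["id"],  # keep the legacy id so re-imports stay idempotent
        "full_name": f'{row["first_name"]} {row["last_name"]}'.strip(),
        "email": (row.get("email") or "").lower() or None,
    }

def import_candidates(legacy_rows, upsert):
    """Upsert each mapped record; skip rows that fail mapping."""
    imported, skipped = 0, 0
    for row in legacy_rows:
        try:
            upsert(map_legacy_candidate(row))
            imported += 1
        except KeyError:
            skipped += 1  # malformed legacy row; the real command logged these
    return imported, skipped
```

Keeping the legacy id on each record is what makes the command safe to re-run: an upsert keyed on `external_id` updates rather than duplicates.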
2. Hardening the OCR & LLM pipeline
- Fixed and enhanced the Python resume processing logic, including:
  - More robust error handling.
  - Fallback to different OCR engines when one fails.
  - Improved rate limiting and load balancing for LLM/API calls.
- Reduced failure modes and made the system more predictable under load.
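The fallback-and-retry behavior can be sketched as follows; the engine adapters, retry counts, and backoff values are illustrative assumptions, not the production configuration:

```python
import time

class OCRError(Exception):
    """Raised when every configured OCR engine has failed."""

def extract_text(document: bytes, engines, retries: int = 2, backoff: float = 1.0):
    """Try each OCR engine in order; retry transient failures with backoff.

    `engines` is a list of callables (hypothetical adapters, e.g. wrappers
    around Tesseract or a cloud OCR API) that return text or raise.
    """
    last_error = None
    for engine in engines:
        for attempt in range(retries + 1):
            try:
                return engine(document)
            except Exception as exc:  # production code catches narrower error types
                last_error = exc
                time.sleep(backoff * (2 ** attempt))  # exponential backoff
    raise OCRError(f"all OCR engines failed: {last_error}")
```

The same retry-with-backoff shape applies to the LLM API calls, with the rate limiter deciding when a request may be attempted at all.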
3. Configurable LLM assessments per job
- Added configuration options to let the HR team control LLM request parameters per job:
  - Different prompts for different job families.
  - Adjustable thresholds and scoring criteria.
- Enabled more context-aware assessments instead of a one-size-fits-all prompt.
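A minimal sketch of what a per-job configuration object can look like; the field names and default values here are assumptions for illustration, not the real schema:

```python
from dataclasses import dataclass

@dataclass
class JobAssessmentConfig:
    """Per-job knobs for LLM candidate assessment (names are illustrative)."""
    prompt_template: str            # prompt tailored to the job family
    score_threshold: float = 0.6    # minimum score to flag a candidate as a match
    max_requests_per_minute: int = 30

# Fallback used when a job has no dedicated configuration.
DEFAULT = JobAssessmentConfig(
    prompt_template="Assess this candidate for the role of {job_title}:\n{resume}",
)

def build_prompt(config: JobAssessmentConfig, job_title: str, resume: str) -> str:
    """Render the job-specific prompt for one candidate."""
    return config.prompt_template.format(job_title=job_title, resume=resume)
```

Storing these per job (rather than hard-coding one prompt) is what lets HR tune an engineering role differently from a sales role without a code change.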
4. Feedback loop for candidate assessments
- Implemented a feedback mechanism so users could mark LLM assessments as positive or negative.
- Feedback was stored and made available for:
  - Improving prompts and configuration.
  - Understanding where LLM outputs needed human correction.
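The loop can be sketched with two hypothetical helpers: one records a thumbs-up/thumbs-down, the other aggregates negative rates per job so that prompts needing attention surface first.

```python
from collections import defaultdict

def record_feedback(store, job_id, assessment_id, positive: bool):
    """Append one feedback event; `store` stands in for a database table."""
    store.append({"job_id": job_id, "assessment_id": assessment_id, "positive": positive})

def negative_rate_by_job(store):
    """Share of negative feedback per job; high values flag prompts to revisit."""
    totals, negatives = defaultdict(int), defaultdict(int)
    for fb in store:
        totals[fb["job_id"]] += 1
        if not fb["positive"]:
            negatives[fb["job_id"]] += 1
    return {job: negatives[job] / totals[job] for job in totals}
```

In practice the aggregation runs as a query over the feedback table, but the signal is the same: jobs with high negative rates get their prompt or threshold reviewed first.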
5. Slack-based reporting and environment support
- Implemented a Slack bot integration to send daily reports to tenant users, providing:
  - Summaries of new candidates and applications.
  - Status of processing jobs and any issues.
- Participated in AWS staging and production environment management, helping to:
  - Diagnose and resolve infrastructure issues.
  - Keep environments consistent and documented.
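A daily report like this can be assembled and posted in a few lines. The sketch below assumes a Slack incoming webhook (one common integration path) and an illustrative report structure:

```python
import json
import urllib.request

def build_daily_report(new_candidates: int, new_applications: int, failed_jobs: list) -> str:
    """Format the daily summary text (structure is illustrative)."""
    lines = [
        ":bar_chart: Daily HRIS report",
        f"New candidates: {new_candidates}",
        f"New applications: {new_applications}",
    ]
    if failed_jobs:
        lines.append(f":warning: {len(failed_jobs)} processing job(s) need attention: "
                     + ", ".join(failed_jobs))
    else:
        lines.append(":white_check_mark: All processing jobs healthy")
    return "\n".join(lines)

def post_to_slack(webhook_url: str, text: str) -> None:
    """POST the report to a Slack incoming-webhook URL."""
    payload = json.dumps({"text": text}).encode("utf-8")
    req = urllib.request.Request(webhook_url, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)  # raises on HTTP errors
```

Separating report formatting from delivery keeps the formatter testable without network access, and makes it easy to send the same text per tenant.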
Results
- The HRIS gained a reliable data import pipeline from the legacy Ruby system.
- The OCR + LLM resume processing became more robust and operationally safe.
- Recruiters and hiring managers gained job-specific LLM assessments with a clear feedback loop.
- Slack reporting improved visibility into what the system was doing and where attention was needed.
- The platform moved closer to a production-ready, intelligent HRIS rather than an experimental prototype.
Tech & responsibilities
- Role: Cross-functional software engineer for LLM-based HRIS
- Technologies: Laravel, Ruby, PostgreSQL, Python, OCR engines, LLM APIs, Slack API, AWS (staging & production)
- Scope: Data import, pipeline hardening, configurable LLM flows, feedback mechanisms, Slack reporting, and environment support
If you’re building an HRIS or similar system and want to use LLMs in a controlled, production-ready way, I can help design and implement the necessary integrations and safeguards.