Projects
Hyperparameter Optimization
Surrogate-Based Optimization of Time-Constrained Hyperparameter Tuning in Deep Learning – Project Link
Brief Abstract –
The paper proposes a surrogate-based framework for hyperparameter optimization (HPO) of deep learning models under strict training-time constraints – a novel formulation of the problem.
The proposed method introduces a smooth penalized objective that jointly considers model accuracy and training time, enabling efficient hyperparameter selection with reduced computational overhead.
A comparative study of classical optimization algorithms—Newton’s Method, Quasi-Newton (L-BFGS-B), and Conjugate Gradient—is conducted to determine the most effective solver for this surrogate objective.
Experiments across prototyping, research, and production scenarios demonstrate that the surrogate-based formulation is both efficient and practical, with L-BFGS-B emerging as the most suitable optimization technique for the task.
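As a rough sketch of the formulation (not the paper's exact surrogate or penalty – the predicted-loss and predicted-time models below are made-up placeholders), minimizing a smooth time-penalized objective with SciPy's L-BFGS-B might look like this:

import numpy as np
from scipy.optimize import minimize

TIME_BUDGET = 600.0  # seconds; an illustrative training-time budget

def predicted_loss(x):
    # Placeholder surrogate for validation loss over two hyperparameters
    # (e.g., learning rate and a batch-size scale); quadratic for simplicity.
    lr, bs = x
    return (lr - 0.3) ** 2 + 0.5 * (bs - 1.0) ** 2

def predicted_time(x):
    # Placeholder surrogate for training time: larger batch scale costs more.
    lr, bs = x
    return 400.0 + 300.0 * bs - 100.0 * lr

def objective(x, lam=0.01):
    # Smooth penalty via a scaled softplus, so the joint objective stays
    # differentiable at the budget boundary (which gradient-based solvers
    # such as L-BFGS-B rely on).
    overrun = (predicted_time(x) - TIME_BUDGET) / 50.0
    penalty = 50.0 * np.logaddexp(0.0, overrun)  # numerically stable softplus
    return predicted_loss(x) + lam * penalty

result = minimize(objective, x0=np.array([0.1, 1.5]), method="L-BFGS-B",
                  bounds=[(1e-3, 1.0), (0.25, 4.0)])
print("best hyperparameters:", result.x, "objective:", result.fun)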
RAG Framework
Modular Retrieval-Augmented Generation Framework – stratovector.ai
Brief –
StratoVectorAI is a highly configurable, end-to-end Retrieval-Augmented Generation (RAG) framework that enables seamless ingestion, processing, indexing, and querying of structured, unstructured, and multimodal data. Built for production-scale AI systems, it supports pluggable components, metadata intelligence, and cloud-native deployment.
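As a hypothetical illustration of the pluggable-component idea (the interfaces and names below are illustrative, not StratoVectorAI's actual API), a minimal RAG pipeline might wire interchangeable embedders, indexes, and generators like this:

from typing import Protocol

class Embedder(Protocol):
    def embed(self, texts: list[str]) -> list[list[float]]: ...

class VectorIndex(Protocol):
    def add(self, ids: list[str], vectors: list[list[float]]) -> None: ...
    def search(self, vector: list[float], k: int) -> list[str]: ...

class Generator(Protocol):
    def generate(self, prompt: str) -> str: ...

class RAGPipeline:
    """Wires interchangeable components into ingest and query stages."""

    def __init__(self, embedder: Embedder, index: VectorIndex, generator: Generator):
        self.embedder, self.index, self.generator = embedder, index, generator
        self.docs: dict[str, str] = {}

    def ingest(self, docs: dict[str, str]) -> None:
        # Embed each document once and register it with the vector index.
        ids = list(docs)
        self.docs.update(docs)
        self.index.add(ids, self.embedder.embed([docs[i] for i in ids]))

    def query(self, question: str, k: int = 3) -> str:
        # Retrieve the top-k documents, then ground the generator on them.
        hits = self.index.search(self.embedder.embed([question])[0], k)
        context = "\n".join(self.docs[i] for i in hits)
        return self.generator.generate(f"Context:\n{context}\n\nQuestion: {question}")

Because each stage is a Protocol, any embedder, index, or generator that satisfies the interface can be swapped in without touching the pipeline itself.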
Fine-tuning LLMs
Fine-tune your LLMs/SLMs with ease – TuneLLMs
Fine-tuning LLMs/SLMs is a task that still largely remains the preserve of people with expertise in AI engineering.
TuneLLMs makes fine-tuning LLMs/SLMs comparatively easy for people with little or no experience in the AI world, as sketched below.
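To give a sense of the kind of workflow TuneLLMs aims to abstract away, here is a minimal sketch of a parameter-efficient LoRA fine-tune using Hugging Face transformers and peft; the model name and hyperparameters are placeholders, and this is not TuneLLMs' actual interface:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "gpt2"  # placeholder small model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA trains small low-rank adapter matrices instead of all model weights,
# which is what makes fine-tuning feasible on modest hardware.
lora_config = LoraConfig(
    r=8,                        # adapter rank
    lora_alpha=16,              # scaling factor
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a tiny fraction of weights train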
Access to stratovector.ai and TuneLLMs is restricted to the public while the work is in progress. For collaboration, reach out to me on LinkedIn.