Who we are

We are a consulting and technical outsourcing firm based in Ho Chi Minh City, Vietnam, dedicated to helping businesses solve problems & achieve transformation through integration with NLP and AI technologies.

Our engineers bring a range of backgrounds and technical capabilities, led by our founder & Head of Engineering, Mr. To Huy.

Get in touch today to discuss how our expertise and solutions can be customized to match your needs. Our dedicated team is ready to assist with any inquiries.


Our Founder

Mr. To Huy

Mr. To Huy is our founder and also our Head of Engineering. He has nearly a decade of experience in the field of Data Science & AI, working in Singapore across various startups and the technical R&D departments of large multinational companies. A graduate of Nanyang Technological University, Mr. To soon found his calling in AI and eventually became a Principal Engineer specializing in AI & Data Solutions at Panasonic prior to founding TPH Consulting.

With TPH Consulting, he hopes to channel his passion for building exceptional products into helping businesses take full advantage of AI: to grow, achieve business success, and transform lives against the backdrop of what many refer to as “The Next Industrial Revolution.”

Our Technical Capabilities

Integrations & Frameworks

  • Integrations with LangChain + LangGraph

  • Integrations with Vector Databases (Pinecone/Chroma/Weaviate)

  • Integrations with Ollama, vLLM, or closed-source models

  • Integrations with the Hugging Face ecosystem

  • Integrations with Gradio/Streamlit for MVP development

  • Integrations with CopilotKit for Site-native Chatbot UI/UX

  • Integrations of AI logic into Python/NodeJS Backend
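To make the vector-database integrations above concrete, here is a minimal, dependency-free Python sketch of the retrieval step that services like Pinecone, Chroma, or Weaviate perform at scale. The bag-of-words `embed` function is a deliberately crude stand-in for a real neural embedding model, and the documents are invented purely for illustration.

```python
import math
import re

def embed(text: str) -> dict[str, int]:
    """Toy bag-of-words 'embedding'. A production system would use a
    neural embedding model; this stand-in keeps the sketch self-contained."""
    vec: dict[str, int] = {}
    for tok in re.findall(r"[a-z']+", text.lower()):
        vec[tok] = vec.get(tok, 0) + 1
    return vec

def cosine(a: dict[str, int], b: dict[str, int]) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query, the lookup a
    vector database performs over stored embeddings."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

# Invented example documents, e.g. a company knowledge base.
docs = [
    "Invoices are processed within five business days.",
    "Our office is located in Ho Chi Minh City.",
    "Support tickets are answered within 24 hours.",
]
print(retrieve("Where is the office located?", docs))
```

In a real integration, `embed` and the similarity search are delegated to the embedding model and the vector database respectively; the control flow stays the same.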

Technical Expertise & Focuses

  • LLM Model Finetuning on Proprietary Data (Texts)

  • Advanced NLP Apps with RAG (Retrieval-Augmented Generation)

  • Intelligent Agentic Chatbot

  • Prompt Engineering

  • Quality Dataset Curation for AI Training

  • Other NLP-related tasks & applications
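As an illustration of how prompt engineering and RAG fit together, the sketch below assembles retrieved text into a grounded prompt for an LLM. The template and the `build_rag_prompt` helper are hypothetical examples for this page, not a fixed house standard.

```python
# A hypothetical prompt template; real templates are tuned per use case.
RAG_TEMPLATE = """Answer the question using only the context below.
If the context does not contain the answer, say you don't know.

Context:
{context}

Question: {question}
Answer:"""

def build_rag_prompt(question: str, retrieved_chunks: list[str]) -> str:
    """Assemble the prompt sent to the LLM: retrieved chunks become
    grounding context, which helps reduce hallucination on proprietary data."""
    context = "\n".join(f"- {chunk}" for chunk in retrieved_chunks)
    return RAG_TEMPLATE.format(context=context, question=question)

prompt = build_rag_prompt(
    "How long does invoice processing take?",
    ["Invoices are processed within five business days."],  # e.g. from a vector DB
)
print(prompt)
```

The retrieval step supplies the chunks; the template carries the prompt-engineering decisions (instruction wording, refusal behavior, context layout).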

LLMOps/AIOps & Application Deployments

  • Deployment on the cloud with AWS/GCP

    • using Docker Images hosted inside EC2 Instances

    • using Serverless Architecture

    • using AWS Bedrock/GCP Vertex AI

  • Deployment on the cloud through Hugging Face’s Inference API

  • Integration with Hugging Face’s TGI (Text Generation Inference) framework

  • Monitoring application performance using LangSmith
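Per-call tracing of the kind LangSmith provides as a managed service can be sketched in plain Python as a decorator that records latency and status for each model call. The `traced` decorator and `TRACE_LOG` below are illustrative assumptions, not the LangSmith API.

```python
import functools
import time

# In-memory trace store; a real setup would ship these records to a
# monitoring backend instead. (Illustrative, not the LangSmith API.)
TRACE_LOG: list[dict] = []

def traced(fn):
    """Record each call's name, status, and latency in milliseconds."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            result = fn(*args, **kwargs)
            status = "ok"
            return result
        except Exception:
            status = "error"
            raise
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            TRACE_LOG.append({
                "fn": fn.__name__,
                "status": status,
                "latency_ms": round(elapsed_ms, 2),
            })
    return wrapper

@traced
def fake_llm_call(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"echo: {prompt}"

fake_llm_call("hello")
print(TRACE_LOG)
```

A managed tracing service adds token counts, prompt/response capture, and dashboards on top of this basic pattern.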

Get Started