So it’s about training the LLM to do less, not more. Restrict its scope to a subset of Jinja plus a handful of nuances specific to this bit of SaaS and how they implemented Jinja. Not gigabytes of data to train on, but just a few dozen documents with examples. The code LLM should retain some ability with HTML and CSS code.
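To give an idea of what I mean by a few dozen documents with examples, here is a rough sketch of what such finetuning records could look like, written out as JSONL from Python. The field names ("prompt", "completion") and the example content are just my placeholders; the actual format would depend on the model and toolkit we pick.

```python
import json

# Hypothetical instruction/response pairs for the Jinja-focused finetuning set.
# Field names and content are placeholders, not a required schema.
examples = [
    {
        "prompt": "Format a customer's first and last name with uppercase initials.",
        "completion": "{{ customer.first_name | capitalize }} {{ customer.last_name | capitalize }}",
    },
    {
        "prompt": "Loop over products and output one <td> per product name.",
        "completion": "{% for product in products %}<td>{{ product.name }}</td>{% endfor %}",
    },
]

# Write one JSON object per line, the usual JSONL layout for finetuning data.
with open("finetune_examples.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```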
A bit like the recent article about Microsoft’s WaveCoder LLM, where they kept things compact but useful.
The model needs to run somewhere affordable in the cloud; I’ve read about Refact, Predibase, etc., but there are probably more options. Google Cloud, AWS or DigitalOcean would also be fine. Initially it doesn’t have to be very fast, so no all-day rentals of big GPUs.
During finetuning, while we are training the model, we don’t need public access to it. The end deliverable needs a basic web UI where visitors can chat with the model to get help with their HTML, CSS and Jinja. I’d like it to capture these chat prompts so we can monitor the effectiveness and correctness of the output and continue finetuning based on real usage; a rough sketch of what that capture could look like follows below.
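The sketch below shows the prompt-capture idea only: a chat endpoint that logs every prompt/response pair to a file for later review. The Flask setup, the log format and the generate() stub are all placeholders, not a spec; generate() would be replaced by a call to whatever hosted finetuned model we end up with.

```python
import json
import time
from flask import Flask, request, jsonify

app = Flask(__name__)
LOG_PATH = "chat_log.jsonl"


def generate(prompt: str) -> str:
    # Placeholder: swap in a call to the finetuned model's API
    # (Refact, Predibase, or a self-hosted endpoint on GCP/AWS/DigitalOcean).
    return "{# model response goes here #}"


@app.post("/chat")
def chat():
    data = request.get_json(silent=True) or {}
    prompt = data.get("prompt", "")
    reply = generate(prompt)
    # Capture every prompt/response pair so output quality can be reviewed
    # and fed back into later finetuning rounds.
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps({"ts": time.time(), "prompt": prompt, "reply": reply}) + "\n")
    return jsonify({"reply": reply})


if __name__ == "__main__":
    app.run(port=8000)
```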
Typically, this Jinja is used to build, say, product recommendation blocks for email or websites. The HTML and CSS dictate a grid layout, and the Jinja loops over a list and fills the rows and columns. Other use cases without HTML and CSS are data checks and data manipulations, like formatting email address strings, first and last names, telephone numbers, etc.
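To give an idea of that grid use case, here is a rough sketch rendered with the jinja2 library from Python. The variable names (products, name, price) are just placeholders, not our actual data model, and the real templates would of course carry the HTML and CSS from the SaaS itself.

```python
from jinja2 import Template

# Illustrative only: a 2-column product grid built by looping over a list.
# The batch filter splits the product list into rows of two.
template = Template("""
<table class="grid">
  {% for row in products | batch(2) %}
  <tr>
    {% for product in row %}
    <td>{{ product.name }}: {{ product.price }}</td>
    {% endfor %}
  </tr>
  {% endfor %}
</table>
""")

print(template.render(products=[
    {"name": "Shoes", "price": "€49"},
    {"name": "Socks", "price": "€9"},
    {"name": "Hat", "price": "€19"},
]))
```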
In short: we take an existing programming-oriented open-source or affordable LLM, finetune it to reduce its scope to a subset of Jinja, then we make a simple web UI for end users to chat with it.
Hourly Range: $20.00-$45.00
Posted On: February 09, 2024 08:50 UTC
Category: Full Stack Development
Skills: ChatGPT, AI Model Integration, AI-Generated Code, LLM Prompt Engineering, LLM Prompt, AI Code Generator
Country: Netherlands
