Research Engineers/Scientists

Generative AI, Large Language Models
Are you our Next Visionary?

We are seeking an innovative and seasoned back-end development leader who is passionate about the future of Artificial Intelligence. Join our rapidly growing company and help position us for success in this hands-on role.

Who is Openstream.ai?

Openstream.ai® enables enterprises to engage in meaningful and fluid conversations with their audiences across modalities, channels, and languages through our visionary multimodal, plan-based Conversational AI platform, Eva (Enterprise Virtual Assistant). The platform is finely tuned by world-class AI experts to orchestrate the latest AI approaches and tools, delivering the best conversational experiences for end users and businesses.

Openstream.ai has been included in over 20 Gartner analyst research reports in 2023 alone and was named the sole Visionary for the second consecutive year in the Gartner Magic Quadrant for Enterprise Conversational AI.

Key Responsibilities
  • Research and Development: Conduct research and develop pipelines to build LLMs on text and, possibly, other data modalities such as speech, image, and video.
  • Model Development: Build, test, and deploy LLMs for real-world applications, including machine translation, conversational agents, and question-answering.
  • Collaboration: Collaborate with cross-functional teams including engineering, product management, and user experience teams.
  • Code Development: Write efficient, maintainable code for implementing prototypes and production-level models.
  • Data and Evaluation: Design data collection/annotation solutions and the systematic evaluations necessary for developing and maintaining production systems.
Preferred Qualifications
  • PhD or Master’s degree in Computer Science, Machine Learning, NLP, or a related field.
  • Strong programming skills in Python, C++, or other relevant languages, along with frameworks such as PyTorch.
  • Demonstrated experience in designing and implementing scalable AI models for production.
  • Deep technical understanding of Machine Learning and Deep Learning architectures such as Transformers, as well as training methods and optimizers.
  • Practical experience with the latest technologies in LLMs and generative AI, such as parameter-efficient fine-tuning, instruction fine-tuning, alignment techniques such as reinforcement learning from human feedback (RLHF), advanced prompt-engineering techniques like Tree-of-Thoughts, retrieval-augmented generation (RAG), and tool-augmented LLMs.
  • Hands-on experience with emerging LLM frameworks and plugins, such as LangChain, LlamaIndex, vector stores and retrievers, LLM caching, LLMOps tools (MLflow), Hugging Face libraries, etc.

Interested candidates must be available to work from our Melbourne, Australia office and able to join us immediately.