LLMs and AI integrations

Overview

Spezi provides several modules that facilitate communication between users and the app and improve accessibility. These include a template chat interface, speech-to-text and text-to-speech functionality, and integrations with large language models (LLMs).

Features

LLMs

LLMs are powerful tools with a wide range of applications, and their use in healthcare settings has grown rapidly. Patients often have general questions about their condition between clinic visits, and contacting their provider regularly with these questions can be difficult. LLMs have the potential to serve as an in-between resource, answering certain questions and suggesting relevant resources to patients. This remains an assistive tool, and information from an LLM should still be confirmed with the healthcare provider, but it can offer patients greater transparency about their care.

[Screenshots: three iPhone examples of conversations with LLMs, including OpenAI and local models.]

Fog-layer Language Model

There are two primary concerns when applying LLMs to healthcare: 1) erroneous information and 2) compromised patient confidentiality and security. Cloud-based LLMs are prone to errors on granular details, and the opacity of cloud providers increases the risk that sensitive information is compromised. Edge computing partially addresses these problems by processing data locally, but it lacks the computing power of standard cloud-hosted LLMs. To take advantage of both the cloud and edge models, Spezi LLM adds a fog layer, which increases available computing power while keeping data processing close to the source and helping to secure patient information. As a result, it is also possible to use custom models rather than relying solely on OpenAI's cloud-hosted models.
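
For developers, the fog layer is exposed through the SpeziLLMFog target of Spezi LLM. The sketch below shows how an app might register a fog platform alongside an on-device fallback in its Spezi configuration; the type names (LLMRunner, LLMFogPlatform, LLMLocalPlatform) follow the SpeziLLM module naming, but the exact initializer parameters shown are illustrative assumptions and should be checked against the SpeziLLM documentation.

```swift
import Foundation
import Spezi
import SpeziLLM
import SpeziLLMFog
import SpeziLLMLocal

class ExampleAppDelegate: SpeziAppDelegate {
    override var configuration: Configuration {
        Configuration {
            // The LLMRunner coordinates inference across the registered platforms.
            LLMRunner {
                // Fog platform: dispatches requests to an LLM node on the local network.
                // The CA certificate for the fog node's TLS traffic is an illustrative
                // assumption; consult the SpeziLLMFog documentation for the real setup.
                LLMFogPlatform(
                    configuration: .init(
                        caCertificate: Bundle.main.url(forResource: "ca", withExtension: "crt")
                    )
                )
                // On-device fallback when no fog node is reachable.
                LLMLocalPlatform()
            }
        }
    }
}
```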

LLMonFHIR and HealthGPT are two applications that specifically use patient data, allowing patients to query or interact with their FHIR and Apple Health data, respectively. Both use the additional fog layer, which increases output generation speed compared to purely local models.

LLMonFHIR
HealthGPT
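
To give a sense of how an app consumes a model once a platform is configured, the sketch below streams a generated response into a SwiftUI view. It assumes the runner/session pattern and the LLMFogSchema and LLMFogSession types from the SpeziLLM documentation; the model identifier and system prompt are illustrative placeholders.

```swift
import SpeziLLM
import SpeziLLMFog
import SwiftUI

struct LLMDemoView: View {
    // The LLMRunner is injected into the SwiftUI environment by the Spezi configuration.
    @Environment(LLMRunner.self) private var runner
    @State private var responseText = ""

    var body: some View {
        Text(responseText)
            .task {
                // Create a session for a fog-dispatched model; the parameter values
                // below are placeholders for illustration only.
                let session: LLMFogSession = runner(
                    with: LLMFogSchema(
                        parameters: .init(
                            modelType: .llama3_8B,   // placeholder model identifier (assumption)
                            systemPrompt: "You are a helpful assistant."
                        )
                    )
                )

                do {
                    // Stream tokens as they are generated and append them to the view.
                    for try await token in try await session.generate() {
                        responseText.append(token)
                    }
                } catch {
                    responseText = "Generation failed: \(error.localizedDescription)"
                }
            }
    }
}
```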

Chat

Spezi provides a template chat feature, similar in appearance to iMessage. It supports both standard keyboard text input and speech-to-text. On first use, the app asks the user for microphone and speech recognition permissions.

This can be used for provider-patient communication, which remains secure within the app, and it can also serve as the interface to a chatbot.
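
For developers, the chat template is available through the SpeziChat package. Below is a minimal sketch of embedding its ChatView in a SwiftUI view, based on the usage shown in the SpeziChat README; the exact ChatEntity initializer may differ slightly.

```swift
import SpeziChat
import SwiftUI

struct ExampleChatView: View {
    // A Chat is an ordered collection of messages with roles such as .user and .assistant.
    @State private var chat: Chat = [
        ChatEntity(role: .assistant, content: "How can I help you today?")
    ]

    var body: some View {
        // ChatView renders the message bubbles and provides the text and
        // speech-to-text input field shown in the screenshots below.
        ChatView($chat)
            .navigationTitle("Spezi Chat")
    }
}
```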

[Screenshots: three iPhone screens demonstrating the chat feature, with blue text bubbles for the user's messages and gray bubbles for the other party's. The first shows the chat itself, the second text input on the keyboard, and the third voice-to-text input.]

Spezi Speech

Speech-to-text and text-to-speech are important accessibility features that can help many users, including deaf or hard-of-hearing users and patients with mobility impairments who may find typing difficult. Users can even create a Personal Voice in Apple's settings, allowing text-to-speech output to simulate a specific person's voice.
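
The Spezi Speech module builds on Apple's system speech frameworks. As a rough illustration of the text-to-speech side, the sketch below uses Apple's AVSpeechSynthesizer directly rather than the SpeziSpeech wrapper, so the SpeziSpeech API itself may look different.

```swift
import AVFoundation

/// Minimal text-to-speech sketch using Apple's AVSpeechSynthesizer,
/// the system API that speech modules such as SpeziSpeech build on.
final class Speaker {
    private let synthesizer = AVSpeechSynthesizer()

    func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        // Uses a system voice; once a user has created a Personal Voice and
        // granted authorization (iOS 17+), it appears among the available
        // AVSpeechSynthesisVoice instances and can be selected here instead.
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        synthesizer.speak(utterance)
    }
}
```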

[Screenshot: the Spezi Chat interface, styled like a messaging app with blue text bubbles for the user's messages and gray bubbles for responses; the text input box at the bottom shows an orange microphone listening for voice input.]

Technical resources

Developers can access the Spezi Chat, Spezi Speech, and Spezi LLM repositories and their issue pages directly from the links below.

GitHub | Issues Page
Spezi Chat | Spezi Chat Issues
Spezi Speech | Spezi Speech Issues
Spezi LLM | Spezi LLM Issues

FAQs