Amazon’s re:Invent 2019 conference in Nevada started spectacularly, with major product announcements made during a midnight keynote.
Amazon Web Services (AWS) introduced Amazon Transcribe Medical, a new edition of its Transcribe voice recognition tool that allows developers to add medical speech-to-text functionality to their applications. It also debuted DeepComposer, which lets AWS users compose music using AI and a physical (or virtual) MIDI controller.
Amazon Transcribe Medical features an API that integrates with voice-enabled apps and is compatible with most microphone-equipped devices on the market. According to Amazon, it is meant to transcribe medical speech for primary care and to be deployed “at scale” across thousands of health care facilities, improving note-taking for clinical staff by supporting both medical dictation and conversational transcription. Transcribe Medical also shares many features with the base Amazon Transcribe, such as automatic and smart punctuation.
Management of the service
Transcribe Medical is essentially standalone: it requires no provisioning or management of any sort. It simply sends back a stream of transcribed text in real time.
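As a rough illustration of how a developer might drive the service, the sketch below assembles the parameters for a medical transcription job as exposed by the AWS SDK for Python (boto3). The job name, media URI, and bucket names are placeholder assumptions, and batch-job submission is only one way to consume the service (the article describes real-time streaming, which uses a separate streaming API):

```python
def build_medical_job_request(job_name, media_uri, output_bucket,
                              conversation=False):
    """Assemble keyword arguments for Transcribe Medical's
    StartMedicalTranscriptionJob operation (parameter names follow
    the boto3 'transcribe' client; values here are illustrative)."""
    return {
        "MedicalTranscriptionJobName": job_name,
        "LanguageCode": "en-US",            # the service targets US English
        "Media": {"MediaFileUri": media_uri},
        "OutputBucketName": output_bucket,  # transcripts land in this S3 bucket
        "Specialty": "PRIMARYCARE",         # primary care, per the announcement
        # DICTATION for recorded notes, CONVERSATION for clinician-patient talk
        "Type": "CONVERSATION" if conversation else "DICTATION",
    }

# Hypothetical usage -- the request would then be submitted via boto3:
#   import boto3
#   boto3.client("transcribe").start_medical_transcription_job(**request)
request = build_medical_job_request(
    "visit-notes-001",
    "s3://example-input-bucket/visit-notes-001.wav",
    "example-output-bucket",
)
```

The `DICTATION`/`CONVERSATION` switch mirrors the two transcription modes the service supports.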
The service is also covered under AWS’ HIPAA eligibility and business associate addendum (BAA), which means that any user who enters into a BAA with AWS can use Transcribe Medical to process and store protected health information.
According to Amazon, Amgen and SoundLines are already using Transcribe Medical to generate text transcripts from recorded notes and feed them into downstream analytics:
“For the 3,500 health care partners relying on our care team optimization strategies for the past 15 years, we’ve significantly decreased the time and effort required to get to insightful data,” said Vadim Khazan, president of SoundLines, in a statement.
The availability of Transcribe Medical in the AWS regions US East (N. Virginia) and US West (Oregon) follows months after Amazon made three of its AI-powered, cloud-hosted products — Transcribe, Translate and Comprehend — eligible under the Health Insurance Portability and Accountability Act of 1996, commonly known as HIPAA.
HIPAA is the main law providing data privacy and security safeguards for medical data in the United States.
Notably, Amazon isn’t the sole tech company working on speech recognition products aimed at the health care segment: earlier this year, Microsoft announced plans to team up with Nuance to develop AI software capable of understanding patient-clinician conversations, which could ultimately be integrated with medical records. Philips, one of the main rivals, has long offered tailor-made automatic transcription solutions for such scenarios.
Earlier today, AWS presented DeepComposer, which it calls “the world’s first” machine learning-enabled musical keyboard. Featuring a 32-key, two-octave layout, it is designed to help developers prototype either pre-trained or custom AI models.
Composers first record a short musical tune (or sample an existing one), then select a pattern for their desired genre, along with the model’s architectural parameters and a loss function, which measures the difference between the algorithm’s output and the target value.
Afterwards, they pick hyperparameters — values set before the learning process begins — and a validation sample. Once all of this is done, DeepComposer outputs a composition that can be streamed in the AWS console or shared externally.
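To make the loss-function idea concrete: the snippet below is a generic sketch, not DeepComposer’s actual training objective (which Amazon has not detailed here). It shows mean squared error, a common loss that quantifies the gap between a model’s output and the target values, alongside a hypothetical hyperparameter dictionary of the kind set before training begins:

```python
def mse_loss(predicted, target):
    """Mean squared error: the average squared difference between
    the model's output and the target values."""
    assert len(predicted) == len(target), "sequences must align"
    return sum((p - t) ** 2 for p, t in zip(predicted, target)) / len(predicted)

# Hyperparameters are fixed before training starts, as the article notes.
# These names and values are purely illustrative.
hyperparameters = {
    "learning_rate": 0.001,
    "batch_size": 32,
    "epochs": 100,
}

# Toy example: the model is off by 1.0 on one of three values.
loss = mse_loss([0.0, 1.0, 2.0], [0.0, 1.0, 1.0])
```

A lower loss means the generated output sits closer to the target, which is what the training loop drives toward across the chosen number of epochs.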