We worked with a healthcare startup on a mission to provide healthcare professionals with an improved ability to remotely monitor at-home patients. Specifically, the target was to prevent patient falls, a billion-dollar cost to the healthcare system. Reducing patient falls would reduce injuries and their associated costs, and would improve the overall patient experience in acute, home care, and long-term care environments.
To deliver on this mission, state-of-the-art IoT devices and cameras were deployed to track patient vitals. Using the data feeds supplied by the devices, Bitstrapped developed and trained computer vision machine learning models to accurately detect patient falls. This enabled the monitoring systems to promptly alert healthcare staff so they could respond effectively when an accident occurred.
To make this possible, Bitstrapped used Google Cloud Vertex AI to train models, generate predictions, and automate pipeline orchestration for data intake and processing. Bitstrapped architected the cloud infrastructure to automate the hard parts of operating machine learning in production, so that healthcare professionals could focus on patient outcomes.
The immense challenge for computer vision algorithms is determining the position of a person in an image. For this customer, the algorithms their data science team was testing were not yet consistently successful at this. Bitstrapped would need to help improve both accuracy and performance.
The labelling solution at the time was manual and did not work in a team setting. Images had to be downloaded by hand, and there was no effective way to manage them at high volume. Improving this would be critical to labelling and re-uploading data at scale. A proper labelling solution would provide certainty as to whether a given image showed a patient upright or fallen on the ground. Bitstrapped would need to improve the labelling practices to ensure the accuracy of the models.
There was also a lack of the production pipelines teams need to track models, experiment, iterate on new data, and retrain. Bitstrapped would need to architect new infrastructure to meet the nuances of production machine learning.
Finally, these models were required to run at the edge, that is, on or near the IoT devices. At the time, ad hoc commands were being issued to run the models. This approach would not scale, so Bitstrapped would need to implement automated orchestration.
Bitstrapped started by designing a production MLOps process to ensure end-to-end model training for the fall detection algorithms. The chosen architecture for pre-processing and data ingestion was serverless, with IoT Core powering the deployment of models to the edge. Significant effort went into implementing transfer learning with state-of-the-art object detection models, optimized with TFLite for edge deployment. As images were collected, the architecture would handle data augmentation and pre-processing of the images to speed up downstream training pipelines.
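The augmentation and pre-processing step can be sketched as follows. This is a minimal illustration, not the production pipeline: the function name and the specific transforms (normalization plus a horizontal flip) are assumptions chosen to show the pattern of turning one raw camera frame into several training-ready variants.

```python
import numpy as np

def augment_frame(frame: np.ndarray) -> list:
    """Produce simple augmented variants of a camera frame.

    A hypothetical pre-processing step: normalize pixel values to
    [0, 1] and add a horizontally flipped copy, so a fall detector
    sees patients from both orientations.
    """
    normalized = frame.astype(np.float32) / 255.0
    flipped = normalized[:, ::-1, :]  # mirror along the width axis
    return [normalized, flipped]

# Example: one 240x320 RGB frame yields two training variants.
frame = np.random.randint(0, 256, size=(240, 320, 3), dtype=np.uint8)
variants = augment_frame(frame)
print(len(variants))  # 2
```

In a real pipeline, steps like this would run as the ingestion stage so that training jobs downstream receive already-normalized tensors.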
Once the ML pipelines were in place, focus shifted to the accuracy of the object detection model. A new process for labelling data would help with this. Specifically, the goal was to implement a semi-automated labelling solution that would detect failures and bring humans into the loop as needed. This way, the labelling team could label images efficiently, while also enabling meta learning: making use of previously trained models to label new datasets. Label automation was achieved by integrating a custom instance of Label Studio into the architecture and MLOps process.
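The human-in-the-loop routing described above can be sketched as a confidence gate: model pre-annotations above a threshold are accepted automatically, and the rest are queued for a human labeller. The threshold value, function name, and prediction tuple layout here are illustrative assumptions, not the customer's actual scheme.

```python
CONFIDENCE_THRESHOLD = 0.85  # illustrative cutoff, tuned per deployment

def route_predictions(predictions):
    """Split model pre-annotations into auto-accepted labels and
    items queued for human review.

    Each prediction is assumed to be (image_id, label, confidence).
    """
    auto_labelled, needs_review = [], []
    for image_id, label, confidence in predictions:
        if confidence >= CONFIDENCE_THRESHOLD:
            auto_labelled.append((image_id, label))
        else:
            needs_review.append(image_id)
    return auto_labelled, needs_review

preds = [("img-001", "upright", 0.97),
         ("img-002", "fallen", 0.62),
         ("img-003", "fallen", 0.91)]
auto, review = route_predictions(preds)
print(auto)    # [('img-001', 'upright'), ('img-003', 'fallen')]
print(review)  # ['img-002']
```

Only the low-confidence items reach a person, which is what makes the labelling workload shrink as the model improves.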
For the data science team, the new labelling infrastructure was architected on Google Kubernetes Engine and included a production Kubeflow deployment. Vertex AI and Kubernetes were used to operationalize the ML pipeline: pre-processing, training, and deploying the models. This included Vertex AI notebooks, flexible compute for experimentation, Jupyter notebooks, and GPUs as needed to accelerate training time.
The results can be summarized in two major areas. The first big win was auto-labelling, which saved significant time in model training. The second was the frequency at which models could be retrained. Both capabilities were realized through cloud infrastructure automation.
With the labelling solution in place, there was no longer a need for on-premise labelling and training. The end-to-end MLOps pipeline for fall detection included auto-labelling with Label Studio. When labelling was completed, a hook into Kubeflow would be triggered to automatically train the model.
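One way such a hook might look is a small handler that inspects a Label Studio webhook event and, for completed annotations, emits the parameters for a training run. The event name, payload shape, and pipeline name below are assumptions for this sketch; a real integration would follow Label Studio's webhook schema and submit the run via the Kubeflow Pipelines client.

```python
def handle_label_studio_webhook(payload):
    """Decide whether a labelling event should kick off a training run.

    Returns the parameters for a hypothetical training pipeline, or
    None for events that are not completed annotations.
    """
    if payload.get("action") != "ANNOTATION_CREATED":
        return None  # ignore events that are not completed labels
    return {
        "pipeline": "fall-detection-train",  # hypothetical pipeline name
        "project_id": payload.get("project", {}).get("id"),
    }

# A simplified event of the kind the webhook might deliver:
event = {"action": "ANNOTATION_CREATED", "project": {"id": 7}}
run = handle_label_studio_webhook(event)
print(run)  # {'pipeline': 'fall-detection-train', 'project_id': 7}
```

Keeping the trigger logic this thin means the webhook receiver stays stateless; all the heavy lifting lives in the pipeline it launches.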
The most advanced piece of the architecture was meta learning. New models were derived from meta learning over newly labelled data: effectively, the models did the labelling themselves instead of the ML team, and the human-in-the-loop capability was used simply to verify the models' output.
This nearly fully automated process freed up all the time previously spent manually downloading, labelling, and uploading data. As a result, the same teams could focus on model accuracy and new model development.