Tuesday 24 March 2015

Supporting clinical research: a conversation with LifeQ

The age of biometric data is upon us, but the science is not ready to explain what the data means. In some ways it is really exciting; the potential is huge, even if we ignore the marketing materials and focus on the potential long-term use of simple data collected under real-world conditions. Stephanie Lee from BuzzFeed had a sobering analysis of Apple's new ResearchKit: the healthcare and clinical research value of the data is pretty much zero. I completely agree (see my thoughts on Apple's foray into healthcare here). The only group that might see some value is the same group that already has access to healthcare and quality jobs (see here for primary data from the Pew Research Center). This means that the biometric data pulled from the "iEcosystem" will not reflect the population that acutely needs to be understood biometrically. (I'll provide a detailed example of the issues later in this blog.)
In my opinion, any data that is tied to a specific mobile device or "Internet of Things" (IoT) object is useless for healthcare unless it can be compared and combined in aggregate across devices and demographics.
It reminds me of the mid-nineties, when genomic sequencing was going to revolutionize healthcare and disease treatment. Twenty years later, we are finally realizing that the genome itself is an almost irrelevant piece: the context in which that genome is read, acted upon by the cell, and communicated between cells is more important than any point mutations or small-scale genomic changes. (I have written about this here, in the context of cancer.)
The genomic age was necessary to spark the discoveries that are starting to change healthcare, but the changes won't be realized because of genomic biology alone. It seems to me that we are at the same crossroads with the Internet of Things (IoT). The technology is really cool and the visualizations are solid but......what do I do with it?
For example, I have a Fitbit, and it has literally changed my activity purely through trying to get to 10,000 steps......I think that's a good thing- I mean, I lost weight and my back is better.....but I am left wanting more. What types of activity are related to my weight loss? Have I gone far enough? Am I at lower risk for all of the things that I worry about from a health perspective?
I can tell you the data collected by my Fitbit is pretty useless for answering these questions. I downloaded it all and ran it through a few different statistical models, and guess what? None of it appears to be relevant to my ongoing good health. I still use my Fitbit to track my activity, but I have no illusions about the role that the collected data plays in my healthcare decisions.
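For the curious, the kind of check I ran is nothing exotic. Here is a minimal sketch (assuming the export has already been parsed into daily step/weight pairs; the numbers below are made up for illustration) of testing whether steps and weight even correlate:

```python
from statistics import mean

def step_weight_correlation(rows):
    """Pearson correlation between daily step counts and weight.

    `rows` is a list of (steps, weight) pairs, e.g. parsed from a
    fitness tracker's CSV export (the data below is invented).
    """
    steps = [r[0] for r in rows]
    weights = [r[1] for r in rows]
    ms, mw = mean(steps), mean(weights)
    cov = sum((s - ms) * (w - mw) for s, w in zip(steps, weights))
    var_s = sum((s - ms) ** 2 for s in steps)
    var_w = sum((w - mw) ** 2 for w in weights)
    if var_s == 0 or var_w == 0:
        return 0.0
    return cov / (var_s ** 0.5 * var_w ** 0.5)

# Toy data with a weak downward trend, the kind of "maybe" signal I saw.
days = [(4000, 86.0), (7500, 85.5), (10200, 85.9), (12000, 84.8), (6000, 85.7)]
r = step_weight_correlation(days)
```

With a handful of noisy daily points like these, the correlation is weak and tells you nothing causal, which is exactly my complaint about the raw device data.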
I recently had a chance to talk to a really interesting start-up called LifeQ. LifeQ (@LifeQinc) has restored some of my enthusiasm for IoT and real, impactful changes in healthcare. LifeQ has taken a different approach to the Internet of Things. LifeQ owns intellectual property on an optical sensor that uses light waves to penetrate the surface of the skin and monitor multiple biological measurables: heart rate, blood pressure, oxygen saturation, with other important measurables such as glucose in the beta-testing phase. The real power of LifeQ is not the measurables. Most of the metrics that their sensor measures are relatively commonplace; many devices can measure heart rate, blood pressure and glucose, so these are not unique. The true value of LifeQ as an IoT vendor is really in the predictive models and software that allow identification of changes in one's own biology. As Christopher Rimmer pointed out, this is very similar to the model that Google, Microsoft and Apple have pioneered: LifeQ owns the core data acquisition ("OS") and the core platform for integrating and using the information ("search engine"). If LifeQ can be half as disruptive in healthcare as Google has been in mobile, they can be a driving force for systemic cost reductions and better treatment outcomes.

The device-agnostic approach gives LifeQ a wide potential market in healthcare and fitness, as well as the flexibility to weather the inevitable changes in which devices end users are willing to use. The focus on data acquisition and analysis reduces overhead and ensures that the width and breadth of data needed for accurate modeling can be gathered.
As LifeQ told me during our conversation "You can't build a great, high quality algorithm and data access AND build multi-functional devices at the level required to collect the data we need. There are plenty of companies in the health, medical and consumer device world with the pockets and desire to build high quality readers."
It is a really smart strategy, especially in the complex global healthcare and lifestyle market(s): focus on your strength and be choosy about the partnerships. This strategy allows LifeQ to ensure data quality and, more importantly from a medical perspective, information security. High-quality data that is combinable across devices is necessary to keep the predictive models relevant and increasing in accuracy with successive iterations.
Obviously the key risks are in ensuring that the partners continue to innovate on the physical devices and in integrating data collected by different devices into a single model. To keep with the Google analogy: how do you build the back end to protect against fragmentation of device types when each device manufacturer has specific needs and market segments? The kinds of companies that they are dealing with understand the necessity of spending on the hardware.
Not surprisingly, the initial partnerships are consumer focused, within potential personal niches (for example, those that cater to extreme athletes), with some wider consumer focus. An interesting question will be how LifeQ can integrate the niche data into the predictive model without biasing it against normal people's fluctuations. For example, we know that part of what makes elite athletes, well, elite, is speed of recovery: their heartbeat decreases at rest faster, their rate of breathing decreases faster, their muscles recover faster. So as LifeQ collects this data, what value does it have for us "normals"? Will the models be accurate?
It is not an insurmountable challenge, but awareness of how the data can influence the model and vice versa is a concern for any IoT or Quantified Self technology. It is the early adopter problem: your initial feedback from fanboys and people who share your vision can blind you to the general public's use cases and expectations. It is the exact problem that caused Google to shut down Glass. LifeQ seems quite aware of the potential founder-effect problems.
More important (to me at least) is that they are also engaging the medical community to enable the kinds of use cases that deliver long-term quality of life and better diagnostic test values for health monitoring. These markets are growing and can provide a reliable revenue stream- for example, at-home monitoring or ambulatory care for basic monitoring of HR, breathing, oxygen levels and blood glucose (coming soon), all of which can be monitored today by LifeQ-powered devices. The current monitors with the accuracy LifeQ needs are cumbersome, but LifeQ-powered devices are easier to wear than what most hospitals have- they can be worn for long periods without patients being strapped to wires or stuck in bed. The potential for clearer test results under real conditions is tantalizing.
What is next?
Like all start-ups, LifeQ is focused on ensuring their product is the best by addressing every element that could negatively affect its core product. The really interesting piece will come from the meta-analysis once the number of users hits a large enough N to ensure predictability across populations.
LifeQ acknowledged the potential limitations of an optical sensor (skin color, lean-muscle-to-fat ratios, as well as stability issues caused by user activity). They are working on expanding the repertoire of sensors that the LifeQ platform can collect data from.
The real issue facing LifeQ, and any of the more robust quantified-self devices and analysis platforms, really comes down to action steps. For that matter, the same issue exists for personal genomics. Where is the line between normal population variation and a dangerous biometric signature? Is there more harm than good in telling folks everything?
LifeQ has a great platform, and appears to have all the pieces in place to be the "Google for healthcare." They certainly bear keeping an eye on to see what they do next.

Tuesday 17 March 2015

Information Security and medical devices

Lately I've been thinking about consumer focused medical devices. I am a Fitbit user and only ever access the information on my cell. I do not actively share my information in their communities, but I assume that Fitbit uses my information, in aggregate, to make money.
I get it, they are a for-profit company and I am receiving an ultra-cheap service; Fitbit needs to make money on that service somehow.
Now that I've got the niceties out of the way, the rest of this blog is angry and rant-y. In terms of full disclosure, some of my anger comes from the recent kerfuffle over General Mills' plan to treat social media as a binding contract to protect itself from litigation, and some from my fright over the new Internet of Things and the push for exponential growth in apps and developers by 2020- which, BTW, is not that long from now.
Any time we expand, something gets left out....and in software development it is usually security and customer privacy (see here for a good summary of where we are at with app security). We already know that many vendors do not take information privacy and governance seriously. Look at the recent Anthem disclosure...and they know they are subject to HIPAA.
The Market potential is enormous
The potential market for technologically enhanced medical devices is huge. (The market breakdown and total value are Espicom numbers. All analysis of the potential of technology to replace or enhance each category is mine. The Canadian market numbers are easily available- and could be verified; most estimates put the US market at 10-100 times the size.)
I estimate a $2 billion market in Canada based on current trends and the potential for technology to enhance current medical devices, broken down by category. If we assume that some of the devices will collect information, this puts the potential software market at $1.2B in Canada alone. The opportunity is enormous...for both commercial ventures and the rampant loss of patient privacy.
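For transparency, the back-of-envelope math behind those numbers looks roughly like this (the 60% data-collecting share is my illustrative assumption, not an Espicom figure):

```python
# Rough sketch of the estimate above; fractions are illustrative assumptions.
canada_device_market = 2.0e9       # ~$2B potential Canadian device market
data_collecting_share = 0.6        # assumed fraction of device value that collects data
canada_software_market = canada_device_market * data_collecting_share  # ~$1.2B

# Most estimates put the US market at 10-100 times the Canadian one.
us_low = canada_software_market * 10
us_high = canada_software_market * 100
```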
A lack of responsibility on the part of software companies
The other part comes from my attempts at conversations with a couple of consumer focused but information-sharing applications. One company provides a cloud-based service that allows doctors and medical students to share patient information, including pictures, with other doctors. It is a great premise BUT......what protection is there for patients? I contacted the company and basically their protections are focused on their own bottom line; they have a "policy in place that meets all legal obligations in their local jurisdictions".....THEY ONLY HAVE POLICY......they also would not disclose, and had no plans to be proactive about, applying any technical protections to block the sharing of patient data.
Am I the only person that has a problem with this?!? This is a product designed for medical personnel to SHARE patient information and they have no plans to protect the data!
As we move forward into the wearables and Internet of Things era, what are the obligations of these companies that hold personal information? We, as consumers, need to hold customer-facing companies responsible for protecting customer information. Doctors and people within the software industry in particular should be absolutely ashamed of the state of medical device and health app security. Both groups have actively undermined efforts to enact better regulations by complaining that it will kill the industry. Here is a note: if you lose private health information, it will kill your company. Shouldn't a company that provides a service enabling the sharing of medical information be held just as accountable? Should they be allowed to merely point to a piece of "paper" and say "not our problem?!?" If your policy says that the user must comply with hospital regulations on patient data sharing, you should provide the hospital a method to enforce that policy. As a patient I need to know that the med student is not sharing pictures of my serious and potentially embarrassing problem just to have a laugh with their friends. It is the reason that Box is such a fast-growing product! It gives end users what they need and it gives the business the protections it may need. In this age of PRISM and companies selling your data (see here for stats), wouldn't it be a marketing advantage to tell customers you go beyond the minimal standards that were set for a paper-based age?
Here is my POV on this: if you are enabling sharing of a person's medical information- whether they are your customer or the patient of a customer- you are obligated to protect that data from the stupidity or laziness of your users. How many busy residents are really going to take the time to ask their patients if they can share the x-ray? Especially if they can control capture and share from a personally owned device? Should I as a patient be forced to spell out the conditions under which I will allow students and doctors to share my information? Is that really a winning strategy in an age of user experience? There should be no such thing as a Facebook, Dropbox or Google Drive for doctors! At a minimum you should provide hospitals the option of enabling controls based on their policy, not just weasel out of it by throwing your hands up and saying "hey, we did our job, they told us that it was alright."
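To make the point concrete, here is a toy sketch (names and rules are entirely hypothetical) of what giving the hospital an enforceable policy, rather than a piece of paper, could look like in code:

```python
# Hypothetical hospital policy, enforced in software rather than on paper.
HOSPITAL_POLICY = {
    "require_patient_consent": True,
    "allowed_audiences": {"care_team", "consulting_physician"},
}

def may_share(item_id, audience, patient_consented):
    """Gate every share request against the hospital's rules.

    `item_id` is whatever identifies the record (illustrative only);
    the decision depends on audience and documented patient consent.
    """
    if HOSPITAL_POLICY["require_patient_consent"] and not patient_consented:
        return False
    return audience in HOSPITAL_POLICY["allowed_audiences"]

# A resident trying to share an x-ray with friends is refused outright.
blocked = may_share("xray_123", "friends", patient_consented=False)
allowed = may_share("xray_123", "care_team", patient_consented=True)
```

The point is not the two `if` statements; it is that the vendor, not the overworked resident, is the one who can actually put this gate in front of every share button.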

Thursday 1 May 2014

The obligation of mHealth vendors to protect patient information

Lately I've been thinking about consumer focused medical devices. I am a Fitbit user and only ever access the information on my cell. I do not actively share the information in their communities, but I assume that Fitbit uses my information, in aggregate, to make money. I get it, they are a for-profit company and I am receiving an ultra-cheap service; Fitbit needs to make money on that service somehow.

Now that I've got the niceties out of the way, the rest of this blog is angry and rant-y. In terms of full disclosure, some of my anger comes from the recent kerfuffle over General Mills' plan to treat social media as a binding contract to protect itself from litigation. The other part comes from my interactions with a couple of consumer focused but information-sharing applications. One company provides a cloud-based service that allows doctors and medical students to share patient information, including pictures, with other doctors. It is a great premise BUT......what protection is there for patients?

I contacted the company and basically their protections are focused on their bottom line; they have a "policy in place that meets all legal obligations in their local jurisdictions"......they also would not disclose, and had no plans to be proactive about, applying any technical protections to block the sharing of patient data.

Am I the only person that has a problem with this?!? 

As we move forward into the wearables and Internet of Things era, what are the obligations of these companies?

We hold customer-facing companies responsible for protecting customer information. Shouldn't a company that provides a service enabling the sharing of medical information be held just as accountable? Should they be allowed to merely point to a piece of "paper" and say "not our problem?!?"

If your policy says that the user must comply with hospital regulations on patient data sharing, you should provide the hospital a method to enforce that policy. As a patient I need to know that the med student is not sharing pictures of my serious and potentially embarrassing problem just to have a laugh with their friends. It is the reason that Box is such a fast-growing product! It gives end users what they need and it gives the business the protections it may need.

In this age of PRISM and companies selling your data (see here for stats), wouldn't it be a marketing advantage to tell customers you go beyond the minimal standards?

Here is my POV on this: if you are enabling sharing of a person's medical information, you are obligated to protect that data from the stupidity or laziness of your users.

How many busy residents are really going to take the time to ask their patients if they can share the x-ray? Especially if they can control capture and share from a personally owned device? Should I as a patient be forced to spell out the conditions under which I will allow students and doctors to share my information? 

There should be no such thing as a Facebook, Dropbox or Google Drive for doctors! At a minimum you should provide hospitals the option of enabling controls based on their policy, not just weasel out of it by throwing your hands up and saying "hey, we did our job, they told us that it was alright."

Tuesday 8 April 2014

Clinical data random information

I've become an information hoarder. As I spend more time thinking about information management and speeding the move to better technical systems, I am amazed how general the principles of design are across different industries.

Here is a noob's (i.e. my) "plain spoken" understanding of a key term in managing patient data across hospitals, for predictive analytics and for personal health decision making.

Level setting (i.e. a general definition of clinical data warehousing): clinical data warehousing is a patient-identifier-organized, integrated, historically archived collection of data.

For the most part, the purpose of a CDW is as a database for hospitals and healthcare workers to analyze and make informed decisions on both individual patient care and forecasting where a hospital's patient population is going to need greater care (e.g. patients are showing up as obese; therefore specific hospital programs to fight diabetes are a good idea).
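To make the definition concrete, here is a toy sketch using Python's built-in sqlite3 (table and column names are my own invention, not any real CDW schema): observations keyed by patient identifier and archived over time, so population-level questions can be asked later.

```python
import sqlite3

# Toy sketch of the CDW idea: patient-identifier-organized, integrated,
# historically archived data. Schema and values are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (patient_id TEXT PRIMARY KEY, birth_year INTEGER);
CREATE TABLE observation (
    patient_id TEXT REFERENCES patient(patient_id),
    recorded_at TEXT,   -- historical archive: rows are appended, never overwritten
    code TEXT,          -- e.g. an ICD-10 or lab code
    value REAL
);
""")
conn.execute("INSERT INTO patient VALUES ('p1', 1970)")
conn.executemany("INSERT INTO observation VALUES (?, ?, ?, ?)", [
    ("p1", "2013-01-05", "BMI", 31.2),
    ("p1", "2014-01-07", "BMI", 32.8),
])

# Population-level question: which patients are trending into obesity?
rows = conn.execute("""
    SELECT patient_id, MAX(value) FROM observation
    WHERE code = 'BMI' GROUP BY patient_id HAVING MAX(value) >= 30
""").fetchall()
```

The same append-only store answers both the individual-care question ("how has p1's BMI moved?") and the forecasting question ("how many patients will need a diabetes program?").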

Data warehousing in healthcare also has a use in preparing for both full ICD-10 and meaningful use implementation. For example, McKesson, through its Enterprise Intelligence module, probably has plenty of CDW management capabilities for those only interested in meeting the upcoming ICD-10 and meaningful use deadlines. Those deadlines only worry US hospitals; however, since Canada requires ICD-10 compliance for all EMR systems, this does present a benefit to Canadian healthcare.

In principle, since data warehousing at its core is about building a relational database, it should be EMR-supplier agnostic. Since McKesson is an ICD-10- and meaningful-use-ready supplier, the database itself should conform to standards that would allow general solutions to be used. This article goes through some of the potential benefits and pain points. It is tailored to clinical trials, but the underlying message (that building a CDW is an ongoing process) is the same for other uses.

One example of how this may be done is Stanford's STRIDE; they used the HL7 reference information model to combine their Cerner and Epic databases. This is part of a larger open-source project that may be an option if an organization has some development expertise.
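I have not seen STRIDE's code, but the underlying idea of a common reference model can be sketched simply: map each vendor's field names onto one shared vocabulary, keyed on a patient identifier. The field names below are invented for illustration, not real Cerner or Epic schemas.

```python
# Hypothetical sketch of the common-model idea behind STRIDE:
# rename vendor-specific fields to one shared vocabulary.

def to_common_model(record, mapping):
    """Rename source-specific fields to the shared model's names."""
    return {common: record[src] for src, common in mapping.items()}

# Invented field mappings for two EMR exports.
cerner_map = {"PersonId": "patient_id", "HR": "heart_rate"}
epic_map = {"PAT_ID": "patient_id", "PulseRate": "heart_rate"}

cerner_rows = [{"PersonId": "p1", "HR": 72}]
epic_rows = [{"PAT_ID": "p1", "PulseRate": 75}]

combined = [to_common_model(r, cerner_map) for r in cerner_rows] + \
           [to_common_model(r, epic_map) for r in epic_rows]
# All rows now share one vocabulary and can be grouped by patient_id.
```

The hard part in practice is of course agreeing on the mappings and units, which is exactly what a reference information model like HL7's gives you.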

Since the main users of CDWs tend to be the people doing the analysis (current buzzwords to search for include BI, predictive analytics, enterprise planning, etc.), it is probably useful for health IT professionals to understand WHO and WHAT the CDW is for within the organization...i.e. have a full-blown information governance plan that places a value on information, not just a risk assessment.

Friday 28 March 2014

Security without usability isn't better healthcare

I spend a lot of my time understanding how information is stored, accessed and protected as part of my role as an IT analyst. I am always astounded at how little of what is standard practice in many industries has filtered over to healthcare and/or life sciences (Pharma + biotech + academia).

The recent hubbub about the ACA (AKA Obamacare) has completely drowned out the real transformation opportunity in healthcare. Up until the recent deadlines and political fights regarding the ACA, "everyone" was really concerned about meaningful use. The TL;DR version of the MU legislation is this: make information available to care providers and patients.

So what are we really talking about here? It is really pretty simple; it is information management and the processes that guard against misuse while enabling productivity.

Let's be honest: the EHR/EMR solutions implemented at most organizations do not enable productivity or protect information. Doctors hate them because they do not fit their work patterns (see here), hospitals have significant issues with data protection (see here), and importantly they are not mitigating the biggest risk to patient outcomes (and hospital liability) (see here).

It is time to re-think the information silos in healthcare.

So if a single poorly accessed EHR is not the answer, what is?

I would argue that we need to think about this based on information flow and how we expect the value to be delivered. In this case patient care.

An interesting model to think about is the Canadian delivery model. For example, Ontario eHealth has determined it is neither cost-effective nor timely to build a single system for every hospital. At the moment, 70% of all physician practices and hospitals already have some sort of EHR system in place. So rip-and-replace is not an option; the reality is we need to make lemonade.

Since Ontario funds the hospitals through direct allocation of tax revenue, it is loath to flush that money down the drain.

Therefore the best approach is to control the data itself (including digital images, prescription history, surgery records, etc.) while letting the individual hospitals control how they view and use the data.

In other words- Make it easier to access information based on who you are and what you need the information for!

Focus on the Information exchange layer

(Figure: consolidated information management layout for a patient-care focus.)
So how do we do this without moving to brand new systems and shiny new toys?

The same way every other industry is doing it, especially low-margin, high-risk industries such as oil and gas, insurance and manufacturing: keep the clunky but very secure system and take advantage of the new technologies that enable information sharing. Instead of an all-in-one solution, add an ECM or portal to manage rights, search and presentation. It will be more cost-effective than doing nothing or ripping and replacing.

This structure controls movement and access to patient data, allowing for quick access to the appropriate information based on job and location.  It provides a structure that takes advantage of the current investment in a secure database yet provides a flexible layer that is designed to convey information in context for end users. 
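As a toy sketch of that exchange layer (roles, locations and field names are my assumptions): the record stays in the secure back-end store, and a thin portal decides what a given role, in a given location, may see.

```python
# Toy sketch of an information-exchange layer over a secure record store.
# Roles, locations and field names are invented for illustration.
RECORD = {"patient_id": "p1", "imaging": "ct_scan_001",
          "prescriptions": ["metformin"], "billing_code": "E11"}

# Which fields each (role, location) pair is allowed to view.
VIEWS = {
    ("radiologist", "hospital"): {"patient_id", "imaging"},
    ("pharmacist", "hospital"): {"patient_id", "prescriptions"},
    ("clerk", "office"): {"patient_id", "billing_code"},
}

def view_record(record, role, location):
    """Return only the fields this role, in this location, may see."""
    allowed = VIEWS.get((role, location), set())
    return {k: v for k, v in record.items() if k in allowed}

radiology_view = view_record(RECORD, "radiologist", "hospital")
# The radiologist sees imaging but not billing or prescriptions;
# an unknown role/location combination sees nothing at all.
```

The underlying EMR database never changes; only the presentation layer knows about roles and locations, which is what makes this approach retrofit-friendly.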

This may not be the best system, or the system that you would design from scratch with an unlimited budget, but it gives you long-term flexibility AND doesn't require a rip-and-replace of your current EMR/EHR. It should provide very good, highly usable healthcare at a reasonable cost.

The way Ontario is going about the change may not be splashy, but it will work for both patients and doctors- that's a great thing. The one thing it won't fix is the doctors who refuse to use it- and that is a bad thing.

There is additional cost involved in this model, but if the doctors and nurses do not use what you have now.....wouldn't salvaging that investment be better?

Love any comments or critique of the model.

Saturday 1 February 2014

Big data is just a euphemism for lazy and cheap

Maybe I'm getting cantankerous, but I'm really over all of the talk about big data and how it is going to revolutionize the world: businesses are going to be so efficient they will only need a CEO and a lowly marketing guy; governments will be so efficient taxes will be almost unnecessary.

Enough! The reality is that big data isn't new, and most organizations are not mature enough or focused enough to take advantage of the new technology.

Learn the lessons of the past.
I was (am) a scientist. I did my Ph.D. in neuroscience and genetics back when sequencing a single gene took months. For reference, bleeding-edge technologies can now deliver a whole genome (about 20 thousand genes) in 15 minutes.

I have already complained in this blog about the challenges of knowledge management in science, and the parallels in businesses today. I'll summarize: businesses suck at getting the right information to workers because they are cheap and lazy.

No one wants to pay to do it right, everyone thinks that the app should be cheap and reduce labor cost by reducing the need to hire smart people. 

Well folks organizing and analyzing data/information is hard and takes a deep understanding of the difference between junk and INFORMATION.

The original Big data problem
Scientists have always generated large, complex data sets that are almost too difficult to comprehend.
As we enter the genomics era in science, it has gotten worse, because most scientists have not taken the time to do quality control on the information that they submit to public databases. The public data is spotty at best; how many scientists can honestly say that they trust the gene ontology notes?

N.B. For non-scientists: the Gene Ontology database is a repository of notes, data and published papers about our combined knowledge of each gene's function, interactions and chemical inhibitors. It contains links across species and across several databases.

The problem is that it is incomplete; NLM/NIH does not have the money to maintain it, nor do any of the primary owners. The pace of growth is too much for the curators to keep up with. The number of different sources has also grown: you have images, gene expression studies, drug testing, protein interaction maps.

Science has had a big data problem since before computers. How has the scientific community moved forward and had success even in the face of such poor data stewardship?

People.

Anyone who gets through a Ph.D. has a great analytical mind. They can see through poor-quality data to those nuggets of truth. How do they do this? They focus on finding an answer to a question, and then they build out from that question until they have built a complex, multifaceted answer.

You want to know why science is becoming stagnant and rife with serious ethical lapses and just plain stupid errors of reproducibility?

We do not train scientists to be critical and form questions. We teach them to get a whole lot of data and mold it into a beautiful story, the logic being that if you look at enough data the truth will come out. It never does; if you start out with biased data you will get a biased answer. The data sets are inherently flawed.
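A tiny simulation makes the point (all numbers are invented): estimate average daily steps for the whole population, but sample only from the kind of people who buy fitness trackers, and no amount of extra data fixes the bias.

```python
import random

# Biased data in, biased answer out: the "population" walks ~6000 steps
# a day on average, but only people already above 8000 buy trackers.
random.seed(0)

population = [random.gauss(6000, 2000) for _ in range(10_000)]  # everyone
enthusiasts = [s for s in population if s > 8000]               # tracker buyers

true_mean = sum(population) / len(population)
biased_mean = sum(enthusiasts) / len(enthusiasts)
# biased_mean overstates true_mean no matter how many enthusiasts we sample,
# because the sampling rule, not the sample size, is the problem.
```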

There is no big data, only poorly framed questions. If you have a big data problem, it is because you have been a poor data steward and you don't have a question, so you have no ability to start sifting through the information.

There has always been a lot of information; it is just that we used to train people to work with it, understand it, analyze it and make decisions. More importantly, we understood that failure was a good thing: it is a chance to refine the question and focus on things that will work.

A lesson not learned
There is no such thing as big data, just better storage of the vast amounts of information that life generates. Nothing has really changed; the problem is just more visible- and we downsized all of the keepers of the knowledge. Most organizations (healthcare and Pharma being the key culprits) refuse to train people to think critically and scrutinize the veracity and quality of information/data.

You want to fix the big data problem? Train people to ask questions and let them answer those questions. Or hire someone well trained already, such as the overstocked "bioinformatics Ph.D." class of scientists. The bottom line is that the shiny new system is still going to give you crap data if the person asking the question can't ask good, insightful questions.

Realize that autocorrect is the state of the art in predictive analytics......let that sink in for a minute. Are you willing to leave your career or company to this?

You don't need more data; you need the right data, and the time and confidence to fully vet the quality of that data. We need people who understand the data well enough to test how well it fits the world today. This is a key element of accurate prediction.

In the biomedical sciences this really comes down to how we train graduate students: do we make them learn statistics, or just hope that Excel is good enough? Are we willing to mentor students, or are they just cheap labor for the gratification of the professor? Do we pay attention to how we store and manage information so that the next student can find it?

For most businesses it comes down to why. Is there a business question that we need to solve? What is the problem that we need to fix? Is there a new source of revenue that we can exploit? What are our past failures and what can we learn from them?

Tuesday 21 January 2014

Twenty skills that I (or any Ph.D.) have that are in demand

A while ago Christopher Buddle posted a blog on SciLogs about what you needed to know before becoming a professor. Many of those skills are the ones in demand outside of academia. 

It got me thinking generally what skills I have amassed over a Ph.D, Post-doc and faculty position. For any other "recovering scientists" reading this please feel free to steal this list, add to it or perfect it. Any comments or critique would be welcome. 
  1. Project management- over my academic career I managed to publish several papers in top journals. Some required precise planning of tasks and experiments on short deadlines against competition. This requires ensuring that each set of experiments finishes with a high-quality deliverable.
  2. Human resources- as a professor I had to hire, fire and develop staff. This included students and early-career professionals, where you are balancing what they are capable of today with their career goals. I picked projects for them that matched their skills.
  3. Project planning- a PhD is a set of projects that need to be planned out, with a full timeline, deliverables and costs set out. In addition, a key part of a successful PhD or post-doc is knowing when to kill a project.
  4. Stakeholder relationships- each stage of a PhD requires you to set out goals with your faculty advisory committee. These people provide guidance and advice on where you should spend your time. Part of success is ensuring that you cogently show progress toward each member's idea of your success. The stakes get higher as you move to a post-doc, where you are expected to manage the project and manage the expectations of your boss.
  5. Budget building- as a professor I needed to build RFPs, prioritize purchases based on project needs as well as the long-term strategy of the lab, source infrastructure, manage vendors and raise funds.
  6. Publications- part of a scientist's job is to communicate results to the community. This includes typical writing skills but also graphic design: matching the presentation visualizations to the message and audience.
  7. Data management- all aspects of data management, including ensuring high-quality data recording, metadata, and database design considerations; building database queries; integrating public and owned data into a complete set.
  8. Analytics- a key part of my PhD was defining how to quantify behavior and images. This requires a clear analytic method that allows reproducibility through a clear, logical rubric for scoring purposes.
  9. Web-based research- not just the query but also the decision on good sources and bad ones.
  10. Public speaking- I have given hundreds of lectures to groups of all sizes, both lay and expert. This gives me a large set of tools to fall back on for presentation design.
  11. Individual drive- to do a PhD you need an internal drive to do what must be done.
  12. Intellectual flexibility- as part of my PhD I learned at least 12 different technical skills at a high enough level to use them in peer-reviewed publications and teach them to others. I learned these through reading and just doing; I didn't need to be walked through them multiple times.
  13. Records management- my laboratory worked in a high-demand, high-competition environment. We needed to have all experiments documented in a way that would stand up to legal review and could be used as part of a patent process.
  14. Understanding of several healthcare-related regulations- part of my work was related to drug discovery and some of it was in collaboration with clinicians, meaning that we ensured that all documents and protocols met the required standards.
  15. Graphic design- genetics is a hard area to explain without pictures. I designed many successful visualizations using Photoshop, PowerPoint and old matte photography techniques.
  16. Process design- my laboratory was at the bleeding edge of genetics. This meant that we were constantly building new processes and testing which resources would be best for each process.
  17. Process optimization- due to our unique methods we constantly needed to set production standards and build analytics that allowed us to evaluate and optimize processes, making changes that reduced cost and increased reproducibility and accuracy.
  18. Contract negotiation- as part of my job, I have negotiated service contracts and terms of employment.
  19. Fundraising- academic labs are always looking for new sources of funding and interacting with potential investors/funders.
  20. Strategic product planning- a key part of success is understanding where government priorities are now and will be over the next five years, in order to develop a funding strategy. Successful scientists also have an understanding of the competitive landscape and position their employees and infrastructure to keep up.