
Monday 30 January 2017

Whatever happened to cancer and bad luck? The story of known unknowns

Back in 2015, Bert Vogelstein et al. wrote a really interesting theoretical paper regarding, essentially, "What we don't know about cancer."

We sort of already knew what causes "bad luck," but we can't really articulate what it is.

 (The "Bad Luck" article itself is behind the paywall but this editorial is a really good primer on what was ACTUALLY said by the authors)

I believe, like many in my field (when I still had one), that bad luck is simply physiology that we do not understand. (Yes, a direct rip-off of Arthur C. Clarke.)

Vogelstein et al. caused a media explosion to start off 2015 with the bad luck line; Forbes and the Huffington Post ran with it.

A year later we haven't heard much about the stochastic models (a fancy word for random). While I hesitate to wade back into this morass of jargon, layman interpretation and poor analogies, I will. I think it is important to recognize that the authors were in no way suggesting we (society) should just stop trying to prevent cancer or that individuals have no way to decrease their risk.

I also think it is important to note that this was not about media attention. Bert Vogelstein is one of the fathers of cancer research; he doesn't need media attention to get funding. The point the scientists are trying to make is that some types of tissue make more copies of their cells, and that is why certain tissues have higher rates of cancer. They also acknowledge that there are areas where we have limited research and that may therefore be the source of "bad luck."
One of the biological control mechanisms behind the apparent randomness of cancer is the subject of an area of study called epigenetics. This new field looks at how mutations, rates of cell division and other "things" are controlled in a cell-specific manner.

What is Epigenetics?

In a practical sense, epigenetics is an in-depth look at how genes regulate each other. Genes are, at their strictest definition, the precursors to the proteins (or certain RNAs) that perform (most of) the jobs that make life possible. While genes are the stars of the show, in terms of regulation of biological processes they are the least interesting part of the genome. The interesting part is the so-called junk DNA, or more accurately non-coding DNA, as in it does not code for a protein. If genes are the stars, then non-coding DNA provides the role players and the scenery that move the show forward.

Choreographing the dance of these proteins and their modifications is WHAT epigenetics does for the cell.

Non-coding DNA essentially acts as the context for why a gene should be expressed; for example, certain cell-specific genes will be epigenetically regulated so as to NOT be expressed in the wrong tissue. This occurs through a series of biochemical and physical changes to a gene that, as a group, are considered epigenetic regulators.

Epigenetic processes are the key to providing organisms the flexibility to respond to the environment. So what about these processes makes them so important? As with any regulatory process, it depends.

The best analogy is the "genome as the book of life." If genes are the words by which an organism is "made," then epigenetics is everything else in the book. (See here for a presentation on this topic.)

[I prefer the script and scenery analogy above but the book analogy fits better with the general "Book of Life" analogy for DNA]

 At the lowest level it is the sentence structure that allows the words to have meaning. At the highest level it is the chapter order that gives the story its linear flow. Unlike a real book, each tissue in the body can shuffle each of these elements on the fly... a choose-your-own-adventure book based on cell type.

Keeping with this analogy, the various tools that each cell uses to keep track of its progress through the choose-your-own-adventure would be the epigenetic machinery. This machinery provides the grammar, syntax and paragraph structure that allow the cell to respond to each potentially different path.

As anyone who has ever read a choose-your-own-adventure book knows, part of the fun is tracing back and making a different decision. In a cell this ability to trace back is vital, as it allows certain cell types to re-populate after injury or during normal growth. To bring it back to cancer, the flexibility that is inherent in a choose-your-own-adventure book leaves cells vulnerable to errors.

Imagine that each cell has its own copy of the "Book," and every time it divides it has to make a copy of the book for its kids. The only problem is that it has to do it by hand, one letter at a time. Obviously there are going to be errors, but for the most part they do not change a word or alter a sentence's meaning. On occasion, though, the errors do change sentence structure or alter a word. In a nutshell this is any disease: an alteration of the "Book." Cancer is in some ways a step further. Just as there are verbs, nouns, adjectives, etc. in language, there are categories of genes. When words in the growth category are altered you get cancer, or when you alter the grammar (epigenetics) that provides the rules for those "cancer words," you get cancer.
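To get a feel for why the number of copies matters, here is a toy calculation. Every number in it is invented for illustration; these are not figures from the Vogelstein paper.

```python
# Toy model: more lifetime cell divisions means more chances to mis-copy a
# "growth word". All numbers below are invented for illustration only.
def risk_of_error(divisions, p_error_per_division=1e-7):
    """Chance of at least one copying error after the given number of divisions."""
    return 1 - (1 - p_error_per_division) ** divisions

for tissue, divisions in [("slowly renewing tissue", 10_000),
                          ("rapidly renewing tissue", 1_000_000)]:
    print(f"{tissue}: {risk_of_error(divisions):.2%}")
# -> slowly renewing tissue: 0.10%
# -> rapidly renewing tissue: 9.52%
```

Same per-letter error rate, same proofreading, yet the tissue that copies its book more often piles up far more risk. That, in cartoon form, is the "bad luck" argument.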

What we still don't understand is how many words need to be altered, or how much you can bastardize the grammar of the genome, before you get cancer. That is part of why cancer appears to be bad luck; we simply do not understand the language or the grammar rules sufficiently to judge the quality of the words that we find in cells.
To put it plainly: we don't understand genetics or epigenetics well enough yet to make predictions.

The deeper dive:

While the book analogy is interesting and useful, the beauty of the system in my opinion is that all of this is managed through biochemistry: small differences in enzyme kinetics and subtle changes in protein binding. This biochemistry leads to a wide variety of markers that can be used to denote parts of the genome.

The cell uses different types of biochemical markers for each of the particular categories: grammar, pages, chapters. Each of the different markers has a different ease of use, sort of like an e-book where how you can jump around as a reader depends heavily on how the author thought you would move through the book. The ease of use is a function of the biochemical processes by which the cell adds or removes the markers. If this all seems like overkill just to express a gene... it sort of is.

All of the added layers and differing ease of use allow the cell to exert very tight regulatory control. The cell can mix and match different markers to make bookmarks, highlight favorite passages, mark often-used paragraphs, etc. In general the chapter markers are direct methylation of the DNA. As the name suggests, this is the addition (or subtraction) of methyl groups to DNA. This alters the affinity of DNA for a subset of proteins that occlude the DNA, basically hiding the genes. Removing the methyl groups abolishes this occlusion.
Sentence structure is more complicated, but in general there are two types of post-translational modifications most commonly seen as having a definitive role in this process. These usually involve the structural proteins that surround DNA. These proteins are the histones, and they allow two linear metres of DNA to be packed into a nucleus roughly 0.00001 metres (10 micrometres) across. An amazing feat in and of itself! Histones can be modified in a wide variety of ways, too numerous to mention here. (See here for a review.)
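Just to put that packing feat in rough numbers (a back-of-the-envelope sketch; the nucleus size is an assumed, typical figure):

```python
# Back-of-the-envelope: how compressed is the DNA in a nucleus?
# Both figures are rough, assumed values for illustration.
dna_length_m = 2.0          # total DNA per human cell, about 2 metres end to end
nucleus_diameter_m = 1e-5   # a typical nucleus is roughly 10 micrometres across

linear_compaction = dna_length_m / nucleus_diameter_m
print(f"Linear compaction: about {linear_compaction:,.0f}-fold")
# -> Linear compaction: about 200,000-fold
```

Histones and the rest of the epigenetic machinery are what make that roughly 200,000-fold folding both possible and selectively reversible.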

Full disclosure: my laboratory studied the role of histones in epigenetics, so I am biased when it comes to how interesting these proteins are in the scheme of life.

The combinatorial placement of these proteins and their modifications on DNA allows for a very precise grammar. So precise that cells can communicate to their neighbors exactly which protein should be expressed. Given that there may be as many as 31 thousand genes, that is quite impressive. What are histones? That is another story for another post. The bottom line here is that these proteins control access to the DNA, as well as generally protecting cells from misusing cancer genes and/or accidentally erasing instructions.

As always I love feedback and this is one of my works in progress.

Saturday 3 November 2012

Funding research in the new (poorer) world


The world has changed. Money is tight, and the large foundations and governmental granting agencies are risk-averse... which means senior scientists will get the money.

This means that innovation is going to die. Once you get to be a senior scientist you don't have time to fail; you have to feed the beast. I think the best analogy is Wall St. Yes, "too big to fail, I need a bail-out" Wall St. is exactly what most large labs in the world are!

Rather than launch into some quixotic diatribe about how bad this is for health and for science as an endeavor, I'll just talk about how to make it irrelevant.

The answer is small foundations. They have the focus, passion and community to start a real long-term relationship with young scientists. Brand-new, too-ignorant-to-know-better scientists who got their own labs by being the most innovative and best-prepared post-docs are exactly the ones who will take the chance, if there is money involved.

All foundations seem to be focusing on drug discovery and biomarkers. This is a great space for smaller, disease-focused foundations to occupy. The problem is that most of these organizations do not have the gobs of money to follow it through in a comprehensive manner.


Getting effective novel therapeutics requires engagement of talented, creative scientists


For disease-focused research foundations this can be difficult. In the current economic environment, foundations must have some method to "hook" scientists, whether it be large per-year grants, limited restrictions on spending or speed of review.


For rare diseases this means getting to scientists in their early phase, when they are still shaping (and limiting) the vision of their lab. Rare disease research requires passion, and a way to find general funding that will maintain the lab.


In this day and age, when peer-based mentoring is sadly lacking, a foundation that can provide some guidance on where and how its researchers can leverage funding and expertise can gain loyalty and expand by word of mouth. This will then lead to larger "sexy" studies and fund-raising.


This kind of thinking can work hand in hand with maximizing fund-raising, as you can tell fund-raisers that a large portion of the money raised goes directly to attempting to cure or ease their disease rather than to the greater good.


Overall: small foundations should look to maximize the effect of their funds to effect change in their specific disease.

They can do this by targeting two areas of research:
1. Cellular characterization of the disease (i.e. what properties differ between normal and diseased cells).
2. Drug/therapeutic design and testing, no matter how speculative. This assumes that the idea or test system can pass peer review.


The two areas would have separate competitions and two separate funding paradigms:

1. A short funding cycle, on a micro-finance model: short grants with quick turnaround. A two-year grant with a hard progress report against mutually agreed-upon, measurable milestones.

2. A prestige grant: larger, "no questions asked" funding over five years with no reporting for the first two. Again, mutually agreed-upon, defined goals that MUST be met to receive the final three years of money.


The grants would be open to academia and small biotech. There would also be a bonus for academically led pharma-academic RFPs, with a significant and clear partnership, NOT just "in-kind" contributions.


The research and fund-raising would have a high degree of back and forth. The foundation would hold a stakeholders' conference where selected funded scientists would come and explain the state of the research in layman's terms.


There has to be greater outreach from the scientists at foundations. The office of the CSO should use various forms of social media to engage the public and find funds (with the guidance of the executive board). This can no longer be left to that summer intern who just finished Bio 101. The public is too smart for that, and frankly, if I were looking to a small foundation for funding I would want to know that its scientists are engaged. It should be an expectation, not just a hope, that scientific merit is judged by scientists.

Sunday 16 September 2012

Time for some convergent evolution in knowledge management

As I move from the ivory tower of neuroscience to the practical, business-related advice that Info-Tech gives clients on their IT environment, I'm amazed at how many parallels I see in the needs and the solutions in all kinds of human endeavours.

For example, I just finished talking to a vendor about how enterprises can manage and maximize their content (images, documents, blogs, etc.). Much like my own thinking on this, @OpenText is convinced the core issue is about information movement, not what it is stored in (i.e. a Word doc vs. an Excel file).

For me this comes back to a practical problem that I had as a graduate student. My Ph.D. was on how gene expression relates to brain development. The brain develops in a fascinating manner; it starts out as a tube that grows outwards at specific points to build the multi-lobed, broccoli-esque structure that allows all vertebrates, but particularly mammals, to have diverse behaviours and lifelong learning. These complex behaviours rely on an immensely diverse set of brain cell types. Not only is there great diversity of cells, but each cell needs to get to the right place at the right time.

Think of a commute on the subway; not only do you need the right line, but if you don't get to the station at the right time you won't get to work on time. That could get you fired. For brain cells this could lead to death. For the organism it could mean sub-normal brain function, and potentially death. The fact that the process works is a testament to the astounding flexibility and exception management built into cells by their epigenetic programming.

There is, however, one big problem with this type of brain development: the skull. The skull limits the number of cells that can be created at any given time. Practically, this means that the control exerted over the number of any one cell type must be very tight. The control comes from coordinating which genes are expressed in each cell type, allowing cells to make decisions on the fly. Usually it starts with brain cells taking off in a random direction, which then informs them of what type of brain cell they will end up being when they arrive. The cells then proliferate as they move, based on the contextual information that they receive about how many more cells are needed. This all happens through cell-to-cell communication and rapidly changing patterns of gene expression.

(Wait for it, I'll get back to the parallel problems, honest....)

As you can imagine, this was (and still is) a daunting problem to investigate. My research involved a variety of time-staged images; reams of Excel workbooks on cell counts and brain size; Word docs on behaviour; and whole-genome expression sets. It was a big data problem before the phrase existed (Business parallel No. 1). In reality I had no problem keeping track of all this data, looking at each piece and doing the analysis on each piece. I had very good notes (metadata) and file naming conventions (classification) to ensure that I could easily find the file I needed. I was, in effect, a content management system (Business parallel No. 2). The problem was synthesizing the separate analyses into a cogent piece of information, i.e. something that can be shared with others in a common language that allows them to build their own actionable plan (Business parallel No. 3).
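To make "I was the content management system" concrete, here is a minimal sketch of the kind of naming-plus-metadata discipline I mean. The field names and the example file are hypothetical, not my actual convention:

```python
# A minimal sketch: a file-naming convention that doubles as metadata.
# Field names and the example filename are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class ExperimentRecord:
    project: str    # e.g. "brainDev"
    assay: str      # e.g. "cellCount", "behaviour", "expression"
    timepoint: str  # developmental stage, e.g. "E14"
    replicate: int
    filename: str

def parse_filename(filename: str) -> ExperimentRecord:
    """Turn 'project_assay_timepoint_repN.ext' into a searchable record."""
    stem = filename.rsplit(".", 1)[0]
    project, assay, timepoint, rep = stem.split("_")
    return ExperimentRecord(project, assay, timepoint, int(rep.removeprefix("rep")), filename)

print(parse_filename("brainDev_cellCount_E14_rep2.xlsx"))
```

The point is not the particular fields; it is that the convention is applied every single time, so the metadata can later be pulled out by a script instead of by memory.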

Any scientist reading about my dilemma from 15 years ago can probably relate, and so can anyone else who uses and presents information as part of their job. The reality is that technology can only solve the problem if people recognize the problem and WANT to be systematic in their habits... the willpower to be repetitive in their approach to work is sorely lacking in most knowledge workers. Ironically, a lack of structure kills creativity by allowing the mind too much space to move within. The advent of the NIH's online databases of genomic, chemical and ontological data has given scientists a framework to work within to quickly get up to speed in new areas of investigation. Unfortunately this has not trickled down to individual labs (again, more proof that trickle-down anything doesn't work effectively; it's just not part of human nature).

This lack of a shared framework across multiple laboratories is becoming a real problem for both pharma and academia (and everyone else). The lack of a system has led to reams of lost data, and with them the nuggets of insight that could provide real solutions to clinical problems (Business parallel No. 4). It also leads to duplication of effort and missed opportunities for revenue (grant) generation (Business parallel No. 5).

From a health perspective, if we knew more about what "failed drugs" targeted, what gene patterns they changed and what cell types they had been tested on, we could very quickly build a database. From a rare disease perspective, the cost of medical treatment is partially due to this lack of shared knowledge. How many failed drugs could be of use for rare diseases? We will never know.
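As a thought experiment, a single record in such a database might look something like the sketch below. Every field name and value is an assumption on my part; no such shared resource exists that I know of.

```python
# A hypothetical record for a shared "failed drug" knowledge base.
# All field names and values are invented for illustration only.
failed_drug_record = {
    "compound": "Compound-X",                 # placeholder identifier
    "intended_target": "kinase ABC",          # what the drug was designed to hit
    "gene_expression_changes": ["GENE1 up", "GENE2 down"],
    "cell_types_tested": ["neural progenitor", "cardiomyocyte"],
    "trial_outcome": "failed efficacy endpoint; acceptable safety profile",
    "rare_disease_relevance": "unreviewed",   # the field nobody fills in today
}
```

The value is not in any one record; it is in being able to query thousands of them against the molecular profile of a rare disease.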

This is a situation where scientists can learn from the business community about the technical tools that make long-term, shareable frameworks possible; these tools are available at every price point. Conversely, the frameworks and logic that scientists use to classify and link pieces of content hold lessons for any knowledge worker.

It's time for some open-mindedness on both sides; the needs of all kinds of organizations and workers are converging: too much data, too many types of data, not enough analysis. Evolution is about taking the "things" that work and modifying them for the new environment.


Wednesday 7 March 2012

Open access and generating senior scientist buy-in


There is a vibrant, intelligent but completely impractical debate happening right now around publishing scientific papers; just search #openaccess to see the volume of Twitter posts. The concepts are great: faster dissemination of information, a cleaner peer review process, greater collaboration.

My issue is the lack of a reality check, or of a way to bring it into real-world use. Science at its heart is a glacier: cold, unworried, progressing forward in an unstoppable manner. It also has the inertia of literally hundreds of years and, in general, a highly conservative set of guidelines. What needs to be fought against is the idea that peer review requires a third party outside of the scientific community. The internet and the transparency that it brings make a paid third-party watchdog unnecessary; there are plenty of folks on the internet just looking to yell "gotcha."

There is some value to the conservative mindset that works well for society and for Progress: the burden of proof. The conservative guidelines protect Science from making too many mistakes and jumping to conclusions based on the unexplained. It's what prevents Einstein's theory from being torn down by a faulty wire. It also means that the younger generation of scientists must bring ideas to the hallowed halls of old science and prove they will work, by bringing real-world suggestions that fix the interwoven problems of publication, attribution, grants, jobs, and tenure.

Otherwise it's just pissing in the wind and complaining. Not my idea of what scientists do. They come up with theories, models and explanations that are then roundly torn down by their peers and rebuilt into a better product.

With that said, let's take a look at how open access and "non-publishing" publishing could work in the real world*.

First, some challenges that I see as the main drags on the process:
  1. Comparative analytics. There need to be transparent metrics to gauge the value of the research to the larger community. This can be an issue, since every scientist believes their research is the most important in the world.
  2. Control over content. This may seem trivial, but if I am the principal investigator I'm not sure I want every post-doc and grad student uploading their crappy, blurry images. The metadata surrounding who gets tagged, what gets tagged and the terminology is vital to ensuring that the data can be reviewed by everyone who may be interested.
  3. Clear lines of permission. What role do collaborators play in making this public? Who gets to post it? Where is it hosted? The university still has some co-ownership. This is not something that can be decided afterwards; it has implications for grants, etc.
  4. What about intellectual property? I'm all for open collaboration and sharing data with academics, but what guarantees are there that pharma and biotech will play by these rules? The beauty of public research from the grantor's (read: government's) point of view is that it maximises their investment. That is lost if [insert huge pharma company here] comes along, builds a drug based on that data and charges the public obscene amounts of money.
  5. The content that is made public has to have a finished look. This can't be just thought vomit. It must have some clarity of thought and citations. Put some thought into it! Distill it by putting it into context: why should I care? How does it support or counter the current models? Provide proper citations through hyperlinks. It should be peer review, not crowd-sourced editing.
  6. Control over comments. Comments can't be removed just because someone says your data quality sucks. This has to be transparent, warts and all. You have to take reviewers' suggestions of more experiments seriously. As someone who has reviewed for a variety of journals (top-tier to lower-tier), nothing is more frustrating than taking the time to review and having someone completely disregard it.
  7. Buy-in from senior scientists. Science is largely an oral history; the reality is that for most methods, just having the paper is useless for understanding how to do the experiment or where the problems can arise. The insights of senior scientists who have seen it all are required for this to be truly revolutionary.
  8. Buy-in from administrators. I for one do not believe that open publishing will be any less time-consuming or any cheaper for the scientist. Someone will have to maintain the database for the content and ensure that there is enough capacity for the videos, Excel files and whatnot that will be made available. This will have to be maintained for decades, with back-ups, etc. Someone will have to know where the content is and what it is, AND when it should be taken down and replaced. Right now the university covers those costs for each department's website; it won't take on more without a clear benefit, i.e. grant money, donors, clarity on which faculty members are doing well and, in a perfect world, which need more help.

Now the potential solution:

A dedicated website where each lab publishes its own data. Personally I don't believe that all data should be made available, but some labs and branches of science believe that is best. Allow the system to have some flexibility; some fields are inherently more competitive and technically nuanced than others. Scientists need to retain the ability to check data quality, accuracy and potentially fundamental flaws in experimental setup as a lab group prior to making it public.

I like figshare and Creative Commons and all of the really great tools that are coming at breakneck speed. I love the idea of posting all of the data, but I truly believe that my job as a scientist is to analyse the data, not just find or acquire it to send to others. If open access publication does not include this, it will set cancer and other highly nuanced fields in the biomedical sciences back years. These fields have moved at breakneck speed because publishers place such a premium on succinct analysis. While I do not believe publishers should be the gatekeepers, I do not want to lose the analysis of data for the sake of speed of publication.

The solution: most labs have university-owned/managed websites where the principal investigator (aka PI, professor, i.e. the person whose tuckus is on the line) owns the admin rights. These should become more of a real-world home for the publication and sharing of lab data. The infrastructure exists, and some labs do a great job of updating it with content as it gets published. Everyone else needs to get on board and bring it into the modern age, with appropriate tagging and labels to ensure that it can be found through search engines.

PIs need to retain control; they are the ones who will be held accountable if the "published" result showing a new cure for cancer turns out to be a contaminant. The pervasive nature of the internet means that the media have access to the data and need to hype the most interesting story. "Never let the truth get in the way of a good story" is a truism for more and more journalists. Accidents happen, and while I'm all right with being embarrassed by my peers figuring it out, I'm not all right with it spinning into a worldwide story and having to explain to the public that it was an "oops."

Main point: use the established website, where each PI has admin rights to remove content. Give senior lab members the ability to publish concise analyses with appropriate figures and links to the whole data set, with metadata and clear descriptions. Part of the mentorship for new post-docs and graduate students would be training on what is considered an acceptable level of proof prior to making data public. Laboratories are still training grounds; as someone who has trained students and post-docs, the peer review process gives young scientists looking to move up to the majors an idea of what good science is, and this cannot be lost by opening up publication. I love the citizen science movement, but at the end of the day, like anything, there has always been a difference between someone who does something as a hobby and someone who has the discipline, passion and willpower to dedicate their life to a subject. Training and the culture of science have to be part of it; science is about justifying your opinion and the quality of your data.

Peer review isn't broken, the publishing models are; let's not throw the baby out with the bath water. The university-controlled site allows for clear rules of engagement for pharma and media, and for the level of control that PIs and chairs need to ensure that crappy data doesn't spiral out of control into a scandal. The departmental chair and grant study groups can look at the metrics (website views, re-links, etc.) to build flexibility into the systems for review, whether it be tenure, grants or something else that no one has thought of. Open access will be a failure if it does not bring along everyone involved with the industry (yes, it's an industry! get over it, people). This is not perfect, but it can be piloted in a way that senior scientists from the core Cell, Nature and Science author pool can at least talk about. I think that many of the ideas being bandied about are much better than this for science as a whole and ultimately will be the long-term solution.

That being said, I have yet to see an idea that the top-level cancer, stem cell, etc. scientists will buy into. This is not a group to dismiss; they may not be the majority, but they represent the main reason scientists will not give up on Elsevier or any other for-profit publisher. They are also the presidents or senior leadership of some of the most influential institutions in the world (Caltech, Rockefeller U., Memorial Sloan-Kettering, Max Planck, etc.). As a final reason to get them on board, they also sit on the grant study groups and a variety of other bodies that affect all levels of at least the biomedical sciences.

At the end of the day, the risk posed by publishing incorrect data needs to be balanced against greater access and conversation about what can be done next. Please comment as you see appropriate.


*Disclaimer: I only have experience with a limited number of institutes (eight), so I do not know how widely any of this is applicable. I have no idea if the issues that I am bringing up are universal or limited to the institutes that I have worked at.

Wednesday 29 February 2012

Rare diseases and open access.


I've found myself completely mesmerized by the open access/open science debate. As a recovering bench scientist, it has made me think about a variety of things, but one that is really interesting is the implications for rare disease research and the speed of turning great benchwork into viable drug targets. I'll deal with the larger debate on open access separately, but I wanted to put forward something today (Feb 28th 2015, Rare Disease Day).

2016 update: In my opinion not much has changed in rare diseases in the last year. There have been some moves forward, but like anything in drug and/or therapeutic research, it is time-consuming. I hope that the silence is because we are getting to the point where folks have rolled up their sleeves and are working, not talking.

I am really excited about the prospects for increasing the speed at which potential drug targets can go from bench to bedside. The new technologies (gene sequencing, clinical data capture) can provide faster turnaround through efficient data sharing. The real potential pay-off is the new clinical data that will be available once EMR is implemented widely. The value of that much data, combined with the new genome sequencing technologies, can really provide some much-needed guidance about the genotype-phenotype relationships that may link certain rare diseases. I say may, since it will really come down to data quality and wide dissemination of that data. Getting clinical data into the hands of the molecular biologists and biochemists who can do the bench research is vital to drug design.

2014 Update: With the roll-out of the ACA starting to happen and the FDA crackdown on 23andMe, the landscape for studying and curing rare diseases just got a little better. There is plenty of coverage of the 23andMe imbroglio, but this piece from the Huffington Post is the least sensationalist. My opinion is that the FDA made a decision based on the specific business's lack of response; it is not an indictment of consumer genetics or any paternalistic over-reach. Matthew Herper has a really great analysis of the stupidity and/or hubris that 23andMe showed. The Global Genes Project has a nice blog on the relationship between rare diseases and the ACA.

The bad news is that the sequester has set research back years if not decades and may very well have robbed a whole generation of scientists of their careers (this author included). Tom Ulrich of Boston Children's Hospital has a nice blog on the subject.

2015 Update: The new interesting initiative is precision medicine in the US. I really am proud of the way the Global Genes Project is coming along. I was briefly involved when Nicole Boice started the initiative, and I look forward to seeing how it continues to grow in 2015. I think it does a great job of keeping the conversation on awareness and providing a site to aggregate "best practices" for the rare disease community.

I really hope that the continued access to healthcare that we started to see in 2014 continues. The key will be what we do with the basic information that clinics gather about their rare disease patients. How do we make that shareable across clinics? This, in my opinion, will be the key to consistent diagnoses and clear symptom profiles, which will then better inform scientists of which genes contribute to the phenotype. This is the basis of drug discovery and treatments.

Right now is a real nexus of information, due to the convergence of new technologies with "new" fields of study. Epigenetics is the study of how and why genes get turned on and off; the best analogy is: if the whole genome is the book of life, genes are the words and epigenetics is the sentence, paragraph and chapter structure that gives the words meaning. The other area is an off-shoot of stem cell research: induced pluripotent stem cells (iPSCs). iPSCs, as the name implies, are induced to become stem cells from a variety of other cell types, the most clinically relevant being skin and blood. While the debate rages about iPSCs and their value for replacing non-working cells with new ones [regenerative medicine], one thing that is not in doubt is the power of these cells for modeling disease. iPSCs can be made from patient samples and then shared with other researchers. This may seem trivial, but the more people looking at the same model, the quicker the core problem can be found. If done right, sharing iPSCs with researchers who use different techniques (biochemists, molecular biologists, cancer researchers, etc.) will provide a 360-degree view of the disease.

Update 2014: Unfortunately it seems that iPSC research is becoming marred by scandal. The new "most promising" discovery may be "less real" than one would hope... Paul Knoepfler has a blog on the subject. BTW, if you have any interest in stem cells you should follow Knoepfler's blog; he is an excellent writer and a top-notch scientist.

Update 2015: I think we are past the really bad period, but unfortunately it has also diminished the enthusiasm for iPSCs as models. Although I am not surprised, it has recently been shown that iPSCs form different sub-types of cells based on their tissue of origin (see here for neural and here for heart). This seriously limits the usefulness of iPSCs for drug discovery and would just exacerbate the reproducibility issues that are plaguing science in general, but particularly stem cell research.

Once this happens it's likely that links will be found that can make drug discovery and testing palatable for biotech and big pharma. Drug discovery is expensive but if the community can gather enough information about the molecular and biochemical characteristics of rare diseases then the existing "orphan drugs" can be tested against the characteristics rather than any single disease. 

Update 2015: The orphan drug area is one where we are starting to see movement, such as the recent announcement that the CF Foundation received $3.3B for its rights to Kalydeco. It is an interesting approach that should be considered by any rare disease group looking to expand support and the potential therapies for their disease.

As always, the caution is: who should get the money, and how do you ensure that the cost of the drug to sufferers is appropriate? If a foundation funds the study (in part), does it have an obligation to ensure that the cost of the therapy is reasonable for the average person?

The elephant in the room is, of course, paying for all of this. Scientists need to be able to publish to get grants to pay for post-docs and reagents. While there is some money available from disease foundations, it doesn't cover all the costs of running a lab. That is the job of the NIH. However, their mandate really requires that grants be given out based on the WIDE applicability of the research and the grantee's history of research in that area. Unfortunately this model does not serve the rare disease community very well, nor does it foster a wide range of scientific endeavor. There are hundreds of examples where a rare disease has led to unique insight into a biological pathway that was key to some cancer or other disease.

Update 2014: Rare disease research will survive, but we need to start to fast-track new funding models that focus on highly innovative projects. We know what hasn't worked; we need some research that is different.

Update 2015: Unfortunately I can't say there has been much movement on this. Frankly, scientific funding is horrible right now. I think for rare disease foundations there is an opportunity to foster young scientists to be advocates, invested in their disease, but this requires a new way of thinking about how to fund rather than WHAT to fund.