Wednesday, December 21, 2011

The Innovation Grinch

By Keith McDowell

’Tis the season of good tidings and great joy as Americans take time to enjoy the holidays and ignore the doom and gloom of economic pundits, the gyrations and tepid growth of their 401(k) accounts, the wacky and zany Republican presidential selection process, and our dysfunctional Congress that continues to kick someone’s can down the road. And whose can is it that they are kicking? Could it be the middle class and the 99%? Hmm! I must have missed the latest software release of the “kick the can” game.

And with every yuletide season, we have the inevitable villain – the grinch who steals Christmas. This year’s villain comes gift-wrapped and tied with a bow, just in time to put under the Christmas tree. What was stolen? The payroll tax cut extension! Who stole it? Congress! Yes, once again, Congress puts a lump of coal in our Christmas stocking and removes over $1000 in disposable money per wage earner for nearly 160 million Americans. But wait! Maybe there is still time to save Christmas. We’ll skip a vote in the House of Representatives and send the issue to a conference committee. That should get the job done. (I would attach an LOL at this point, but I’m still not completely sure what that acronym stands for.)

Newt Gingrich has it almost right, but he chose the wrong branch of government. Instead of federal judges and members of the Supreme Court, we should send out the federal marshals to arrest the members of Congress and put them in stocks along the federal mall for common citizens to walk past and express their displeasure. A brand on their foreheads would be even better! The original framers of the Constitution would certainly applaud and understand such old-fashioned treatment knowing full well that it was unconstitutional.

But is such scorn of our Congress and its members justified? Even worse, should we replace the Christmas Grinch with an Innovation Grinch?  Let’s revisit three key issues of interest to the innovation community to determine the need for this new creature: the American Recovery and Reinvestment Act (ARRA) commonly known as the stimulus bill, the 2012 Omnibus bill, and the potential for sequestration of federal funding.

According to the ARRA website, the stimulus bill has paid out $734.4 billion, with $218.6 billion going to Contracts, Grants & Loans, of which $11.3 billion was expended on R&D/Science. An explicit breakdown by federal agency shows that the National Science Foundation, for example, reported $1.41 billion as being paid out. The website contains a map where the total number of stimulus research grants and related stimulus funding per state can be accessed for review. No matter the specific amounts or the distribution around the United States, suffice it to say that billions of one-time stimulus dollars have been and are being spent to stimulate research and development for the purpose of further priming the American innovation engine. But they are one-time dollars, and therein lies the problem. Soon, if not already, universities will cut graduate student support, cut the number of postdoctoral fellows, and significantly reduce R&D staff. That can’t be good for the economy or for innovation. Will the funds be made up through the normal appropriation process?

Two and a half months late and a few dollars short, Congress finally passed the 2012 Omnibus bill for this fiscal year’s federal budget. And just how many dollars short is it? While the NSF budget is up 3% and the Department of Energy’s ARPA-E program grows by 2.5%, the R&D budget for the Department of Defense was cut by $2.5 billion and the science and technology directorate of the Department of Homeland Security was cut by $140 million. All in all, the 2012 federal budget is not good news for innovation or for sustaining any momentum generated by ARRA stimulus funding.

How about future prospects for federal support of R&D and the innovation pipeline? A “Membership Advisory” email to me from the Director of Public Affairs at the American Physical Society tells the story and I quote:

Potential funding cuts will be triggered a year from now in the form of automatic across-the-board reductions – technically called sequestrations – mandated by the 2011 amendments to the Budget Control Act (BCA) of 1985. According to the amended BCA, the recent failure of the Joint Select Committee on Deficit Reduction to come to an agreement on a debt reduction plan, will initiate $1.2 trillion in sequestrations over nine years, beginning with Fiscal Year 2013. The effect on science funding is not yet known, since the sequestrations will apply to appropriations bills that have yet to be written.
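The quoted figure is easier to grasp at annual scale. A back-of-envelope sketch, assuming purely for illustration that the $1.2 trillion is spread evenly across the nine years:

```python
# Rough annual scale of the sequestration described above, under the
# simplifying (and not legally mandated) assumption of even spreading.
TOTAL_CUTS = 1.2e12  # $1.2 trillion over the sequestration window
YEARS = 9            # beginning with Fiscal Year 2013

per_year = TOTAL_CUTS / YEARS
print(f"roughly ${per_year / 1e9:.0f} billion per year")
```

That is real money by any agency’s standard, even before anyone knows how the cuts will fall across individual appropriations bills.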

And it’s not just a simple accounting of dollars and cents that matters! Congress seems bent on the destruction of the United States Postal Service and the removal of mail service on the last mile. What happened to the justification for rural electrification and even the Internet on the last mile as a competitive advantage? Doesn’t the same argument hold for the mail? Furthermore, how does blaming the messengers – civil engineers – help overcome the impending collapse of America’s civil infrastructure? And why continue to support the barrier of export control regulations as an old-fashioned Chinese wall when history teaches that fixed fortifications never keep the enemy out, especially against the modern onslaught of Chinese and Russian hackers?

It’s a game of uncertainty practiced for political and ideological reasons. It’s a rhetorical flourish designed to achieve a besting of one’s opponents. And it’s a dangerous and cynical exercise bereft of factual content sure to dampen innovation and further stall the American economy.

Yes, Virginia, there is an Innovation Grinch! And his name is Congress.

Wednesday, December 14, 2011

NSF Walks the Innovation Talk

By Keith McDowell

On 20 July 1969, Astronaut Neil Armstrong announced to an enthralled America:
“That’s one small step for [a] man, one giant leap for mankind.” His pronouncement of humankind conquering the moon in many ways ended the Space Race and brought about an era of space exploration and research unparalleled in history, although recent budget cuts to NASA and confused vacillations in America’s strategic plan for space potentially signal an end to our leadership.

Today, America faces the Innovation Race – the race to out-innovate our global competitors and continue our dominance in the global marketplace in the face of emerging nations and economies. While many lament the putative decline in America’s competitive advantage, especially as regards the condition of our innovation ecosystem, the National Science Foundation (NSF) potentially has taken “one small step” that will result in “one giant leap” for innovation in America. While others talk the talk, NSF has begun to walk the walk on innovation.

Responding to our national innovation angst and based on input from many sources including the Request-For-Information call from the Office of Science and Technology Policy, NSF recently reconfigured its “Partnerships for Innovation” program into a more nuanced umbrella program containing two components: Building Innovation Capacity (BIC) and Accelerating Innovation Research (AIR). The goal is to build innovation capacity through early support of the partnering of academic institutions with the small business sector and to accelerate innovation research by supporting existing NSF grantees that collaborate with third parties in order to move innovations to market.

BIC is an early-stage program in the discovery to marketplace pipeline designed to “stimulate the transformation of knowledge” obtained through discovery into “market-accepted innovations” via the “re-creation of single research platform” using connectivity to small businesses with the hope that researchers will become “agile in adapting their research for use in new applications” and that the transformed knowledge will serve diverse problem spaces of interest to the business world. The game is to create self-sustaining “research platforms” or “enabling infrastructure” that builds innovation capacity. At the core of BIC is a knowledge-enhancing-partnership (KEP) group that serves as a forum to churn ideas from all elements of the discovery-to-marketplace pipeline. Recognizing the all-important need to manage intellectual property claims and rights, NSF requires an upfront Cooperative Research Agreement (CRA) by participants as part of any award.

AIR is a later-stage program designed to “spur the translation to transfer of fundamental research discoveries towards economic and/or societal impact” through commercialization into the marketplace while developing the entrepreneurial culture and strengthening America’s innovation ecosystem. The game is to bring together existing NSF-funded research alliances and expand connectivity to a broader community including business, venture capital, and other such entities. In many ways it embodies the “community of innovation” concept espoused by the Association of University Research Parks (AURP) and others while playing to the need to solve societal grand challenges and build regional innovation ecosystems.

All in all, it’s a worthy first step for NSF and one that I support, although one needs graphics and pictures, such as the one displayed in our byline, to sort through the complexities of the new programs. And therein resides the story of these new programs. Are they so laden with innovation jargon and government-speak that no one knows what the other person is talking about? What the heck is a “research platform” for goodness sake? Is it akin in spirit to the “weapons platform” lingo of the Department of Defense?

And more fundamental is the underlying theory of an innovation or commercialization ecosystem that underpins the call for proposals under these new programs. Exactly what are the feds at NSF thinking? What are their assumptions and the premises upon which they have structured their programs? A working guidebook to their theories and conceptual framework would be a useful addition and permit enlightened debate. America needs that debate.

But to be fair to the professionals at NSF, their proposal announcement does contain some glimpses into their underlying thinking. For example, recognizing that rapid product development is a reality, especially for the information technology sector, they note that discovery must be closely coupled to economic development – hence the creation of BIC. Their largely unstated assumption is that more connectivity and communication across the commercialization pipeline equals more innovation and more startup companies. Although it would appear self-evident, that assumption needs to be tested.

And then there is the assumption that more connectivity equals more collaboration. But exactly how do people connect and how do they communicate? I suppose such details are the essence of the “research platform” and the expanded structure of the research alliances.

But let’s be clear! These new NSF programs are a wonderful experiment to test and develop the efficacy of such assumptions. They are a fertile ground to posit, experiment, and understand one’s ideas of the functioning of an advanced American innovation ecosystem and to determine what really works in the early stages. And just how likely is it that NSF will succeed?

The current Request for Proposal states that 22 awards will be made for a total of $15 million across both programs. Hmm. While a big step for the usually cautious NSF, by any other measure it’s a tepid one, causing one to wonder if there is a hidden politically correct agenda at work. It would be easy to argue that NSF doesn’t really have a legal or mandated obligation to foster innovation, especially as it relates to commercialization or economic development. And who will “win” the awards from such a small pool of funding? It will be existing converged infrastructures. Folks, that’s not a prescription for testing new ideas or a means of learning something new!

And let’s be frank! The concept of “capacity building” is not new at NSF or in the federal agencies. Been there and done that! It’s called EPSCoR (Experimental Program to Stimulate Competitive Research) – a program having a nearly thirty-year history whose purpose was and is to build research capacity at universities in states and territories that don’t receive their fair share of the federal research funding pie. It’s a rich history with many lessons learned and many successes and failures, even for the commercialization of university research. And that history should be a guide as we embark on BIC and AIR.

For example, as the former Director of the Alabama EPSCoR program, I can assure NSF that a two-year funding window for BIC and AIR grants is …, well, it’s a joke. Nothing really meaningful will be done in that period, although certainly money will be spent, advances will be made, and reports will be filed in a hurry-up manner. It takes five years! That’s a lesson learned from such programs.

But talking the talk had to end and taking a risky step had to begin. NSF has taken that step, albeit a small one. Could it be a giant leap for America in the Innovation Race? Will concept and talk become reality? It’s a gamble that must be undertaken. I applaud NSF for taking on the challenge.

The graphical image in the header of this article was obtained from slide 8 of an NSF PowerPoint presentation available on the Internet.

Thursday, December 8, 2011

Regulation is a Four-Letter Word!

By Keith McDowell

Are you one of those people who insist on driving over the posted speed limit, no matter the circumstances? How do you feel about regulations prohibiting cell-phone use while driving – not to mention the obnoxious restaurant patron blathering away in a loud voice next to you about the inconsequential trivia of his life? And then we have Rick Perry and the Republican presidential candidates, all “fed up” and bothered by “regulations” they claim restrict the growth of business and the formation of startup companies. Has the American enterprise system indeed become so constipated by regulations that innovation is squelched and only a dose of Ex-Lax or “deregulation” will cure the problem? Since when did “regulation” become a four-letter word?

And how about our universities, the ultimate innovation engines of America? Have they also become choked by rules and regulations? A simple, but true story from my own experience as a vice president for research at The University of Alabama reveals the truth. Believe me, even Snoopy in his effort to write the ultimate heroic novel couldn’t make this stuff up any better!

It was a Wednesday morning at the president’s staff meeting before the Alabama homecoming football game when the announcement was made that a fraternity planned to host several elephants as part of their weekend activities. Of course, elephants are the Alabama mascot. My heart stopped! “Do we have a protocol?” I whispered. “No! What’s a protocol?” was the reply. I panicked and announced we had to have an IACUC-approved protocol for the display of animals. “Make it happen” became the order of the day.

With the help of Dr. Marianne Woods, we contacted Washington to determine the best protocol for elephants, knowing full well that PETA had launched a national effort to protest the treatment of confined elephants. No one in Washington had a clue as to the proper care and feeding of elephants. And how could we possibly pull together the membership of the IACUC (animal use) committee? It couldn’t be done by the weekend, we informed the president that afternoon.

But then Alabama football supporters intervened overnight and we were back in business on Thursday morning. Suffice it to say that IACUC met that afternoon and reluctantly approved a protocol submitted by the fraternity. Finding a veterinarian to co-sign the protocol on Friday morning (both our regular veterinarians were out-of-town) was an adventure unto itself. Of course, we got our pound of flesh from the fraternity. On Friday afternoon at 5 pm, all the fraternity members and their dates, decked out in their ballroom finery, were subjected to a short lecture by a scruffy biology faculty member on the proper treatment of elephants.

But it didn’t end there. During photo-ops with the elephants on Saturday morning (if you don’t believe my story, see the attached photo of Dr. Woods and me with one of the elephants), a distinguished and prominent elderly alumnus of Alabama was knocked down and received scratches when a fraternity genius insisted on having pictures taken with the elephant holding a football in its curled-up trunk. The football escaped and the elephant tried valiantly to “catch” the ball, knocking down the alumnus in the process. Arriving at the President’s Box with a torn shirt sleeve and a bloodstained arm, the alumnus subsequently refused to participate in the required inquiry. All in all, Alabama personnel spent over two years bringing the case to a resolution with all parties, including the federal government.

It’s a funny story. Hey, I was worried that the female elephants might charge the football stadium when the famous trumpeting of the male mating call at the start of the game echoed from the stadium! But it’s a story that displays in microcosm what universities experience every day, every hour, every minute, and every second. I often claim that universities are the most regulated enterprises in America. You don’t believe me? Then test your knowledge against the following abbreviated list of regulatory activities and acronyms: IRB, IBC, HIPAA, TAL, ETRAC, conflict of interest monitoring, radiation safety committee, adverse lab events reporting, data and records retention, time and effort, trafficking in persons, export controls and dual-use technology, controlled substances and CFATS, MSDS, secondary chemical labeling, responsible conduct of research, misconduct in science, facilities security officer, … , and the list goes on. For every item in the list, I have funny stories to tell, including the glacial acetic acid gift that kept on taking instead of giving, Babe the goldfish, and Ralph the turtle. But, unfortunately, it’s not a joke or a funny story. Regulations are an integral part of academe. And they affect research, discovery, and the ultimate goal of innovation.

First of all, compliance with regulations consumes an enormous amount of time. Witness the two years spent dealing with the Alabama elephant issue. Imagine how much time it takes for newly minted faculty members to come up to speed with the daunting list of regulatory activities presented above – not to mention the amount of time actually spent satisfying the regulations. Hundreds of IACUC and IRB protocols – sometimes more than a thousand – are reviewed every year on most campuses.

Second, compliance with regulations leads to bureaucracy and what many consider administrative bloat at universities as the price tag for tuition outpaces salary growth and inflation to pay for the costs. Every university these days has a Compliance Office and officer. But as Representative Barney Frank likes to say, it’s not fat in the form of gristle on the edge of our steaks that can easily be cut off, but more like marbling, firmly intertwined in the process of doing business. So how do you like your steaks?

Third, the regulatory environment has continued to morph and evolve into a set of regulations so hyper-technical that universities must hire content specialists to deal with them, often one for every major area such as human subjects versus animal care versus export controls. Does the federal government really believe that carrying the newest laptop with the latest software to China on university business is a violation, especially when the computer parts were likely made in China and the software development was out-sourced to foreign nations? Or must we prove and establish for audit purposes that no federal grant dollars on any federal grant were used for trafficking in people, usually for the sex trade market?

So, let’s cut to the chase! Should government, whether federal, state, or local, regulate our activities, and are we as a society, including universities, over-regulated to the point of stifling innovation, job creation, and business growth? It’s a great question for America and one that should not become politicized and turned into a slogan, as it is in danger of becoming in our current dysfunctional environment. In the macro sense, it’s easy to debase regulations and bemoan their impact, elephant stories notwithstanding. But in the micro sense, when a specific family of regulations is examined in detail – such as “informed consent” for “experiments” on human subjects or even the care and feeding of elephants – one faces the old “wait-a-minute” moment: that moment when we confront the reality of the need for regulations for our own protection and the protection of society as a whole. Yes, we need regulatory reform, but not the wholesale purging suggested by some. Regulation is a difficult issue that will require the best of us, but it’s not a four-letter word – at least, not yet.

Wednesday, November 30, 2011

Innovation Waits for No One!

By Keith McDowell

“Now is not the time!” Have you ever been given that line? Even better, did you know that the classical concept of “now” that we all live by was forever destroyed by Einstein in 1905? That’s right! Two events that are simultaneous, or occur at the same time, in your personal reference frame occur at different times for someone who is speeding by you in their car. Wow! Talk about getting a person to church on time, as famously crooned and celebrated in My Fair Lady. Obviously, the incremental difference in “now” is so tiny that we don’t notice it – unless, of course, your Christmas present turns out to be the latest in near-“light speed” automobiles.

But then “time waits for no one.” Or does it? Can we slow down time by speeding away toward a distant galaxy? Well yes – relative to the reference frame of Earth, but it won’t change your personal perspective on the “passage of time.” And then there are those folks who race through life, hoping to slow down time and catch a few more moments. Good luck on that approach!
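For the curious, the size of the effect at everyday speeds can be sketched with the Lorentz factor; the highway speed below is just an illustrative assumption:

```python
import math

C = 299_792_458.0  # speed of light in a vacuum, m/s

def lorentz_gamma(speed_mps: float) -> float:
    """Time-dilation factor gamma = 1 / sqrt(1 - v^2 / c^2)."""
    beta = speed_mps / C
    return 1.0 / math.sqrt(1.0 - beta * beta)

# A brisk highway drive: 30 m/s, roughly 67 mph.
gamma = lorentz_gamma(30.0)
# gamma - 1 comes out on the order of 5e-15: the two "nows" drift
# apart by mere femtoseconds per second, far below human perception.
print(gamma - 1.0)
```

At half the speed of light, by contrast, gamma climbs to about 1.15, and the disagreement over “now” would be impossible to miss.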

What about innovation? Does it wait for us to “find the time” or does it wait for no one? I think the latter. And therein lies a problem for America. We’ve created an innovation ecosystem with moving parts or processes that waste time checking for conformity to accepted norms or established patterns of behavior. It’s an authoritarian-gatekeeper system guaranteed for the most part to replicate the norm and produce “me too” research and incremental innovation. We like to pretend that it is an “open system” where discoveries and innovations constantly bubble up to the surface of our conscious “now” – a system where the best and the brightest quickly reach the frontiers of the creative mind through independent research and innovation. But the reality is often different. And at the heart of the problem on the discovery end of the creation pipeline is one of society’s and academe’s oldest control mechanisms: peer review.

What is “peer review” and how does it affect the innovation ecosystem? “Peer review” is a simple concept. The notion is that one’s performance, whether as an individual or a collection of individuals, should be evaluated by one’s peers. It’s a practice carried out routinely in our legal system using a jury of “peers.” In academe, the practice takes many forms including principally the following activities:

  • Refereed publications
  • Grantsmanship
  • Tenure and promotion
  • Post-tenure review
  • Program review

To the consternation of many of our best researchers, these activities have grown over the past decade or two to the point that they have pushed aside the time needed to think creatively and be innovative. As Daniel J. Meyer stated succinctly in an article from The Chronicle of Higher Education: “It’s getting impossible to produce my own work I’m spending so much time assessing others!” He further states that “I have many comrades (not ‘in arms’ yet, but it is coming) who are experiencing an unbearable overload of review duties. … Draconian measures, you say? Perhaps. But maybe this is a Drago we should embrace. If not, we are going (to) [sic] take an ailing peer-review system and kill it outright.”

These are strong sentiments and they are shared by many including the author. But can we document the reality in the hope of finding a remedy? It’s tough to do. For example, let’s examine post-tenure review.

Post-tenure review came into vogue in the late 1990s as an accountability or audit tool to satisfy politicians and legislators that someone was looking over the shoulder of tenured faculty members to make sure they continued to be productive following tenure. Typically, on a timescale of five to eight years, tenured faculty members prepare a massive dossier documenting their performance, including student teaching evaluations. Often, external letters are solicited. Depending on the review, corrective actions might be taken, including a change in teaching load, a reduction in research space, or a host of other such actions. It’s the tenure process redux. And in most institutions, the data gathering is now formalized through the maintenance of a yearly faculty activity report. Woe unto the faculty member who doesn’t log in and update his or her data profile in a timely manner!

The demand for such performance data and accountability has become a battle cry for some elements of the right-wing conservative movement in America. The O’Donnell brouhaha in Texas comes to mind in that regard. But let’s be clear. While I support post-tenure review and the use of the faculty annual report, they represent a new element in the innovation ecosystem and they consume time – lots of it.

And then there are program reviews. Once again, the accountability and audit mentality dictated that university programs should be reviewed on a regular basis with a cycle time of five to eight years. Massive reports are created and external reviewers are conscripted – usually with the bribe of a stipend – to pass judgment on a program or department. Based on such data analysis, the Texas Higher Education Coordinating Board has determined that a number of “underperforming” physics programs should be shut down in Texas. Hmm, that should be a real motivator for poor and disadvantaged STEM students in those affected areas! Have we just turned off the next Michael Dell? Has their concept of “now” turned into “yesterday”?

Aside from the recent appearance of post-tenure review and program review, we’ve had in place since before World War II the process of reviewing research and scholarly manuscripts as a means to generate “refereed” publications. I’ve spoken to that issue previously. But what are the hard numbers? In Figure 1, I display the growth in the number of science and engineering publications using recently published data from the National Science Foundation. Over the twenty-year period from 1988 to 2008, the number of such publications nearly doubled and likely has now passed the one million mark per year. That’s a lot of papers to review for the science and engineering community!

Figure 1
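The near-doubling in Figure 1 corresponds to a deceptively modest compound rate. A quick check, idealizing “nearly doubled” as an exact doubling over the twenty years:

```python
def compound_annual_growth(start: float, end: float, years: int) -> float:
    """Average annual growth rate implied by moving from start to end."""
    return (end / start) ** (1.0 / years) - 1.0

# Publication counts treated as doubling from 1988 to 2008.
rate = compound_annual_growth(1.0, 2.0, 2008 - 1988)
print(f"{rate:.1%} per year")  # about 3.5% per year
```

A rate that sounds small in any single year, compounded quietly for two decades, doubles the stack of manuscripts the community must referee.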

With respect to grantsmanship and the peer review of proposals, the data appear to show some measure of saturation over the past decade. Using data taken from the annual Merit Review Reports to the National Science Board, I display in Figure 2 the number of externally reviewed proposals along with the number of distinct reviewers per year. Interestingly, the two numbers are approximately the same – one proposal per reviewer! One might argue that the past decade has shown a crossover in the number of proposals versus the number of distinct reviewers, but it will take another decade to prove this assertion, if true.

Figure 2

The number of distinct reviews for the same time period is shown in Figure 3. Again, not much growth has occurred and there are fluctuations.

Figure 3

A detailed examination of the NSB reports seems to indicate that there is a small trend toward fewer reviews per proposal. Based on these hard data, one cannot conclude that peer review of proposals has significantly increased as a burden over the past decade. Instead, it appears to be a saturated situation. But it still consumes time and is based on proposed research, not performance. I’ve addressed that issue and its effect on innovation elsewhere.

While peer review is firmly ingrained in the American innovation ecosystem, it’s time to understand how we use it and whether it truly is the wisest course of action as we enter the era of global competition.  Now is the time for America to come to terms with peer review, lest our competitors move faster and push our “now” into their “yesterday.” Innovation waits for no one.

Tuesday, November 15, 2011

EXCUSE ME! Your Microscope Is Out of Focus

By Keith McDowell

So how many hours a day do you spend on social networking? Do you tweet, text, or use email services? How many computers, tablets, cellphones, and other wireless devices do you own? Have you recently updated your Facebook and LinkedIn profiles or added new “friends” and connections? Let’s face it folks. Social networking in all its various forms is an exploding new phenomenon, rapidly penetrating all levels of society and creating new channels of rapid communication. Does anyone doubt that the movement “Occupy Wall Street” or the occurrence of “flash mobs” would exist without social networking?

But is “social networking” something that should be studied, researched, and understood through funding by the National Science Foundation (NSF)? Or should such research be assigned a “low priority” as having little or no benefit for society and America? How about understanding terrorist crowdsourcing and other cyber threats to national security played out using social networking? Are they not important subjects to be understood?

And then there is the ever-present fruit fly – a real irritant to social conservatives and those who see waste in the federal funding of research. Does anyone really care whether the design of fruit fly genitalia affects their ability to “hook up” and copulate? Of course, it’s not a topic that keeps me awake at night, absent a fruit fly infestation in my home. But I respect the judgment of experts in the field that such research is important. As Forrest Gump said about a box of chocolates: “You never know what you’re gonna get.”

Let’s be clear. Picking winners or losers in advance in the game of discovery and innovation is mostly a waste of time. It’s not an issue of defunding “whimsical” research, whatever that is. And who determines the winners in advance? What are the criteria? Would you have picked social networking as a multi-billion dollar industry before the fact?

Unfortunately, some people in America choose to take legitimate concerns – what research should be funded, what metrics should be used, whether the processes currently used are appropriate and sufficient, whether waste and fraud are rampant, and what America’s strategic endgame is – and exploit the scientific illiteracy of many Americans, coupled with extreme and often counterfactual social conservatism, to achieve political gains at the expense of discovery and innovation in America. Such an exercise was recently conducted by Senator Tom A. Coburn in his report of April 2011 entitled The National Science Foundation: Under the Microscope.

Americans, including me, support a balanced budget. But is doing “more with less” to the point of starvation a realistic and appropriate goal for our country, especially in an era of global competition? Let’s examine the facts and the history that led to the Coburn Report.

Recognizing that America gives every indication of falling behind in global competition, including the loss of jobs, Congress passed the America COMPETES Act (Public Law 110-69) calling for a doubling of NSF funding over seven years. Passage of the Act was the culmination of many studies including Rising Above the Gathering Storm and a clarion call from leadership in nearly all segments of American society. But according to the Coburn Report, the “dramatic increase in spending passed with little debate or dissent.”

The report further challenges whether increasing the NSF budget “to bolster our economy” is a magic bullet. Instead, the report purports to document widespread fraud, waste, and abuse of the taxpayer dollar through funding of wasteful and controversial projects of limited scientific benefit, excessive amounts of expired funds, inadequate contracting practices, lack of accountability metrics, excessive funding of conference and related travel, duplicative funding with other government agencies, inappropriate behaviors, and lack of transformative research, to name some of the report’s assertions. These are serious charges, and they must be taken seriously and dealt with appropriately, independent of one’s political persuasion or the underlying belief system and principles that support the characterization and interpretation of the facts in the Coburn Report.

While I applaud Senator Coburn for engaging the debate, I strongly and emphatically disagree with both the specifics and the intent of the report’s recommendations. Implementation of the recommendations as structured will play only at the margins and will assuredly dampen both discovery and innovation in America. But we have some common ground! From my perspective, Coburn and his staff put the NSF under a microscope that was out of focus. So let’s review the recommendations of the Coburn Report and bring those recommendations properly into focus.

Establish Clear Guidelines for What Constitutes “Transformative” and “Potentially Transformative” Science: Good luck with such guidelines! Picking winners before they become transformative is a useless exercise. I repeat: would you have chosen social networking as a winner? I put little stock in those who say they know it when they see it before the actual outcome. Discovery and innovation are mostly serendipitous exercises where the accumulation of sweat equity through the funding of putative non-transformative research and even “whimsical” research is essential. Making this argument is not to say that we should not have targeted research. Grand challenge research must be an essential part of the portfolio of funding and our Nation’s discovery and innovation strategic plan. But we should not eliminate or throttle exploratory research and innovation based on political or personal bias. Furthermore, the implied threat to be transformative, creative, innovative, or ELSE never works. The user-friendly mantra of Go Forth and Innovate! is the appropriate strategy.

Set Clear Metrics to Measure Success and Standards to Ensure Accountability: The STAR METRICS program is a worthy federal attempt to achieve this desired outcome and is supported by the Coburn Report and by the university community. Accountability has always been an integral part of the federal funding process. But one must remember that discoveries and innovations are not part of a programmed assembly line easily amenable to accounting and audit in the traditional sense. The debate as to what constitutes appropriate metrics for both research and innovation is ongoing and lively. It is by no means a settled matter as emphasis on the commercialization of university research and the need for a growth in American jobs dominates the discussion.

Eliminate NSF’s Social, Behavioral, and Economics (SBE) Directorate: Simple response: emphatically NO! We live in a world dominated by convergence and network science where What Is Easy Has Been Done. Transformative discovery and innovation will occur at the boundaries and overlap of the physical, biological, and social dimensions of our universe. Enough said on this recommendation!

Consolidate the Directorate for Education & Human Resources: With at least 100 STEM education programs and maybe as many as 200 spread across numerous federal agencies, we have a problem that needs immediate attention. I refer the reader to Go Forth and Innovate! for a full and complete discussion of this recommendation from my perspective. Suffice it to say that I agree with the Coburn Report that we must come to terms with which federal agency should take the lead in funding the STEM education agenda for America.

Use it or Lose It: NSF Should Better Manage Resources It Can No Longer Spend or Does Not Need and Immediately Return $1.7 Billion of Unspent, Expired Funds It Currently Holds: The Coburn Report represents that “[a]pproximately 47 percent of the 151,000 final and annual project reports required in the past 5 years were submitted late or not at all.” Furthermore, “The agency’s record of failing to place an emphasis on closing out expired grants and returning unused funds to the United States Treasury raises question [sic] about the overall fiscal management of the agency.” The Coburn Report concludes that “grant oversight remains as an ongoing management challenge at NSF.” I agree! There is no excuse for failing to file a final report and reprobates and their institutions should be punished in some manner. But to adopt the rather simplistic characterization of this issue as taken by the Coburn Report is not the answer. As one who has managed multi-million dollar grants, I can assure the public that fiscal management of grant dollars is a challenge complicated by personnel timeline management, academic schedules and a plethora of other complex factors such as on time delivery of needed and purchased equipment and the effect on completing the research project. NSF must have the ability to be flexible in this regard and to carry over unspent funds. The notion that NSF has $1.7 billion in available funds is naïve at best.

Reduce Duplication: Develop a Strategic Plan to Streamline Federal Research and Development: I agree in principle. We need a national debate about how we fund R&D and innovation in order to form a better strategic plan. However, duplication in and of itself is chump change in the larger arena and should not be the dominant factor.

Provide the NSF Inspector General Additional Resources and Place a Greater Emphasis on the Office of Inspector General’s Findings: From my experience, NSF and academe place a great deal of emphasis on the OIG’s findings. Some would argue we place an obsessive emphasis on them. Indeed, over the past decade or two, a vast bureaucracy has grown up to deal with the growth of federal rules and regulations and their interpretation. It has become a hyper-technical world where subtle nuances of the meaning of words make a difference. Do we really want further government intrusion into the business of federal funding of research? At some point, a proper cost-benefit balance must be struck. I submit we’ve already reached and perhaps surpassed that point. Further growth in the “accountability culture” will only stifle discovery and innovation and not achieve the desired end result. In that sense, I support the “deregulation” platform propounded in the political arena. But if more “resources” are to be poured into the OIG, I have a simple request. Hire the best and the brightest at a competitive salary. Far too often, university professionals sit across the table from OIG auditors and inspectors who would do well auditing Walmart but know almost nothing about the complexities of the federal funding of research.

Senator Coburn, sharpen the focus on your microscope and take out the fuzziness caused by political and personal bias. Starving discovery and innovation in America because of perceived and even real issues at the National Science Foundation is not the answer. Nor is highlighting and listing research programs that don’t fit your worldview. It’s time to move past an obsession and annoyance with fruit fly research and join with those of all persuasions to forge and craft a new strategic plan for R&D and innovation in America.

Wednesday, November 9, 2011

Are We Too Pooped To Pop?

By Keith McDowell

Do you need one of the modern energy pick-me-up concoctions to make it through the day? Do rolling power outages from a record number of days with temperatures over 100 degrees interrupt your lifestyle? How about the drain on your cellphone battery from too much texting, tweeting, emailing, and gaming? Is the high price of gasoline driving you to consider the purchase of a hybrid or an electric car? How about global warming? Are you one of the many people suffering adverse consequences from the effects of severe weather events? If so, join the crowd! It’s a world gone crazy as we find ourselves “too pooped to pop” and “too old to stroll” as Chuck Berry famously crooned.

But wait! Are we really running out of readily available energy or does it just seem that way at times? Have we become so dependent on rapid access to energy through our high-technology gadgets and our American lifestyle that the slightest interruption portends an energy crisis? Are the lights really dimming in America, the incandescent bulb controversy notwithstanding?

Like many, I thought for many years that we had a looming energy crisis, both in the production of electricity as available electrons and the production of fuels in the form of gasoline, home heating oil, and natural gas. I was certain America was in trouble, not because of dysfunctional government, energy policy, or lack of political will, but from fundamental scientific issues and some basic facts.

To begin with, fossil fuels have a limited lifetime of perhaps a century – quibbling about the exact time frame is a stupid exercise, although fracking and other new discoveries and techniques help to extend their contribution as an energy source. But the book Out of Gas by David Goodstein and the existence of Hubbert’s Peak convinced me that the rate of fossil fuel consumption would eventually pass its rate of production. And the issues of human-driven global warming and adverse climate change as by-products of the use of fossil fuels along with the concomitant increase in carbon dioxide emissions were real show-stoppers for me. America needed to find a way to slow the use of fossil fuels.
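Hubbert’s original model treats cumulative extraction as a logistic curve, so the annual production rate traces a symmetric bell that rises, peaks, and then declines – the “peak” in Hubbert’s Peak. A minimal Python sketch of that idea; every parameter value below is invented purely for illustration, not a real-world estimate:

```python
# Sketch of a Hubbert curve: production rate as the derivative of a
# logistic cumulative-production curve. Parameters are illustrative only.
import math

def hubbert_rate(t, peak_year=2005.0, width=15.0, ultimate=2.0e12):
    """Annual production rate at year t.

    ultimate: total recoverable resource (hypothetical units),
    peak_year: year of maximum production,
    width: controls how spread out the bell curve is.
    """
    x = math.exp(-(t - peak_year) / width)
    return ultimate * x / (width * (1.0 + x) ** 2)

# Production rises toward the peak year, then declines symmetrically.
r_before = hubbert_rate(1990)
r_peak = hubbert_rate(2005)
r_after = hubbert_rate(2020)
print(r_before < r_peak > r_after)  # True: a single peak at peak_year
```

The model’s point, and Goodstein’s, is that trouble starts at the peak, not when the resource runs out: once the rate of production begins its decline while demand keeps growing, consumption outruns supply.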

The short-term solution appeared to be nuclear power, although the Three Mile Island, Chernobyl, and now Fukushima incidents demonstrate that real and long-term issues exist for the industry – Mother Nature being the strongest antagonist through earthquakes, tsunamis, and the gradual leakage of toxic and radioactive wastes into the biosphere. But as a physicist, I accepted these risks and knew that they could be mitigated.

Hydropower, geothermal, and biomass alternatives didn’t compute in either the short- or the long-term for me for many reasons including scale and basic economic considerations. Although wind power continues to demonstrate its efficacy as an alternative source for electrons, it will never answer the full projected demand curve for electrons nor solve the need for fuels. It’s not the ultimate or even the short-term solution.

The long-term solution seemed simple to me, but scientifically and technologically challenging. We needed fusion power! We needed sexy, high-technology physics projects like the National Ignition Facility to unlock the secrets of Mother Nature and to turn on the Sun right here on Planet Earth. Advancing our understanding of fusion power would culminate in a limitless energy source to do almost anything we wanted to do. And it would push forward the frontiers of science to boot! The scientist in me was thrilled at the prospects of yet another triumph of humankind over nature. But then I had my epiphany! That moment when you realize how stupid you’ve been and that the solution has been in front of you all along. It’s called solar energy!

My personal journey to the realization that solar energy is the solution began in 1979 and 1980. After reading the 1979 American Physical Society study entitled Solar Photovoltaic Energy Conversion by H. Ehrenreich – I’m one of the select few who actually read the document from cover to cover – and listening to the discourse at the time about photovoltaics, I became convinced that environmental issues surrounding the large-scale mining of the exotic metals needed to produce photovoltaic devices, the exorbitant costs, and the “low technology” flavor of solar panels militated against solar energy as a solution.

But then I read the wonderful article by George Johnson in the National Geographic for September 2009 entitled Plugging into the Sun that summarizes the current status of the solar-power industry around the world. Two important factors finally dawned on me. First, the flux of solar photons onto the Planet Earth is enormous: the Earth receives more energy from the Sun in one hour than the world used in all of 2002. [Wikipedia] Translation: We already have a fusion source of energy that provides a nearly unlimited supply of free, convertible photons. Second, the technology needed to convert photons into electrons or electricity already exists and the market forces are rapidly making the production and sale of solar panels for homes or buildings cost competitive. Furthermore, we already know how to construct and utilize giant solar power plants. Translation: solar power is not a scientific or technological challenge.
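The scale of that one-hour claim is easy to sanity-check with back-of-envelope arithmetic. The sketch below uses rounded textbook values – the solar constant, the Earth’s radius, and an approximate figure for 2002 world primary energy use – all of which are my assumptions rather than numbers from the article:

```python
# Back-of-envelope check: solar energy intercepted by Earth in one hour
# versus approximate world primary energy consumption in 2002.
import math

SOLAR_CONSTANT = 1361.0   # W/m^2 at the top of the atmosphere (rounded)
EARTH_RADIUS = 6.371e6    # m (rounded)

# Earth intercepts sunlight over its circular cross-section, not its surface.
cross_section = math.pi * EARTH_RADIUS ** 2          # m^2
power_intercepted = SOLAR_CONSTANT * cross_section   # W, about 1.7e17
energy_one_hour = power_intercepted * 3600.0         # J

WORLD_USE_2002 = 4.0e20   # J/yr, approximate world primary energy use

print(energy_one_hour / WORLD_USE_2002)  # about 1.6
```

Even this crude estimate, which ignores the roughly 30 percent of sunlight reflected back to space, puts one hour of incoming solar energy above a full year of 2002 consumption.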

But this raises the question: why is America not jumping with both feet onto the solar energy bandwagon? We want energy independence. Why not solar, especially given that it is a “solved” solution? Thomas L. Friedman in Hot, Flat, and Crowded has opined more broadly on the issue, advocating for market forces to drive innovation across the spectrum of possible energy sources including solar. Even as I write this article, an email has arrived in my inbox with a Paul Krugman editorial entitled Here Comes the Sun in support of solar energy and pointing out the rapid acceleration of the sector off most people’s radar screens – the Solyndra story notwithstanding. So what are the pitfalls for solar energy and what about other alternative energy sources both for the production of electricity and for fuels?

One unspoken issue is that we might converge too quickly onto a specific solar energy industry using inferior technology. It’s both the “low tech” issue and, more importantly, the sunk cost issue that plagues the nuclear power industry. Many believe that the prevalent reactor design was chosen too quickly and is sub-optimal. Is the same thing happening to the solar industry?

And then there are all the usual technological and economic issues: the location and viability of American transmission lines (we need a smart grid), energy storage during off hours, distributed generation using local solar panels versus large-scale solar collection plants, and the associated costs of building out the infrastructure. But these are old issues and we have to deal with them independent of choosing solar energy. Indeed, we do that every day. They are not show-stoppers for solar energy.

How about environmental issues? No one should be fooled. There is no such thing as “clean” or “green” energy, alternative or not! Every source of energy carries a burden whether in the production of the materials – think mining – used for the infrastructure or the process itself – consider the environmental issues of solar plants in a pristine desert. Even algae, a potential source for fuels and carbon feedstock, must be “fed” by phosphate salts – or the equivalent – taken from the earth. In the end, it is a trade-off.

And what about the issue of solar energy only producing electrons, not fuels? In my opinion, this is the true insertion point for innovation. What would we do with a large excess of electrons? Would we “flare” them off as we often do with natural gas? Or would we design ancillary systems to absorb and use them in creative ways? With enough electrons, one can convert lots of different materials into fuels. If we no longer need coal to produce electricity, we can extend its lifetime as a feedstock for fuels using electrons as the energy source for the conversion. Even better, we can invent new technologies for converting biomass into feedstock for the chemical and plastics industry instead of the ultimately futile game of converting biomass into fuel. And we can slow down the use of fossil fuels, thereby improving our biosphere and reducing global warming. Yes, we need more innovation through research and development including even studies on fusion energy.

But do we have an energy crisis founded on scientific issues or basic natural facts as I originally thought? Are we “too pooped to pop?” Emphatically, NO! What we have is the lack of societal and political will to use and turn the levers and knobs available to us to effect the transformation to a solar economy. Mercifully, as suggested by Krugman and others, there are positive signs that change is afoot. I’m not an expert on the subject, but federal loan guarantees seem to be working in California. Tax incentives for the installation of home solar panels are essential. We should pursue any and all avenues to stimulate the solar transformation. In short, what we need is a well-constructed “Solar Electrification Program” similar in spirit to the original Rural Electrification program used during the Great Depression. We need to support the Solar Energy Industries Association as a counterpoint to government. We need to understand that solar equals jobs and jobs now! With a clear understanding of where we are in the space of solar power and the bigger space of the energy crunch, we can innovate and replace the current growth in short-term solar installation and construction jobs with longer-term high technology jobs.

Are we too pooped to pop? I think not. It’s time for America to “pop” and go solar.

Tuesday, November 1, 2011

Free Agency: It's a Bad Idea!

By Keith McDowell

President Obama’s Council on Jobs and Competitiveness released an interim report in October entitled Taking Action, Building Confidence. The recommendations and actions proposed in the report comport with conventional wisdom as to what the Nation needs to be doing to accelerate the growth of jobs in America, especially through innovation and the creation of small startup companies. And due attention is given to the role played by the commercialization of university research through entrepreneurship and technology transfer and the need to enhance such activities. All would be well with the report from my perspective in terms of the recommendations for universities except for one thing: they got it wrong!

I suppose “getting it wrong” can be rationalized given the makeup of the Council. Other than a representative of the Broad Institute of MIT and Harvard, there is not a single voice from academe on the Council, and no one knowledgeable in the practical aspects of the commercialization of university research serves on it.

So what is it that they got wrong? In Initiative 2: Nurture the High-Growth Enterprises That Create New Jobs, one of their recommendations on page 22 is to “allow university faculty to shop discoveries to any technology transfer office.” It’s also referred to on page 21 as an “open-source” approach. Originally proposed by the Ewing Marion Kauffman Foundation as the “Free-Agent” or sometimes the “Free-Choice” model, it has been strongly, emphatically, and universally rejected by the university community as a very bad idea with many flaws that will not improve the commercialization of university research, but have the opposite effect: slow it down! At a seminal meeting in February of 2010 hosted by the Council on Governmental Relations in Washington, I presented the arguments against the “Free Choice” model and my speech and PowerPoint presentation are available. The Association of University Technology Managers and others have taken equally strong positions against this model.

But what could possibly be wrong with “open sourcing,” “free agency,” or “free choice?” It sounds like motherhood, apple pie, and the American way. Absent other information, it is easy to become trapped in these rhetorical flourishes and trapped by the claims of the Kauffman Foundation and their supporters at the Department of Commerce that they have “studies” demonstrating the need for such a model. Here’s the reality: there are no such credible studies! Show me the credible studies and I’ll happily review them. What we have is only collected hearsay and the whisperings of disgruntled innovators and entrepreneurs who had a bad experience. No credence is given to the vast majority of commercialization activities successfully conducted every day by competent people all across the spectrum of American universities. No credence is given to the input and rejection of this model by university experts with decades of experience. No credence is given to the rapid expansion and transformation underway at universities in the commercialization of their research.

But let’s be specific for a moment and list some of the principal arguments against the “free agent” model. Additional information and further clarification can be found in the COGR material.

  • The concept didn’t work before Bayh-Dole.
  • The concept hasn’t worked internationally.
  • The concept hasn’t worked well in joint university experiments and some have dropped combined operations.
  • The approach will significantly slow down commercialization due to a) the complexity of multiple inventors and technologies, b) the complexity of funding sources for most research, c) the complexity of having multiple managers across different universities, d) the balkanization of faculty IP, and e) the tangled legal obligations with concomitant legal and financial liabilities.
  • Faculty conflicts of interest through financial interests in licenses and startups.
  • Emphasis on personal benefit to faculty members over societal benefit.
  • Lack of practical experience in commercialization by faculty and their lack of available time to pursue a long-distance relationship.
  • Problems with faculty conflict of commitment with respect to outside agencies.
  • The concept ignores the investment of a faculty member’s university in the inventions.
  • There is no mechanism to bear the cost of free-agent commercialization – inventors don’t have the money and the home institution isn’t going to pay someone else the full cost.
  • Home institutions will be reluctant to risk money on IP managed by another institution.
  • Potential emergence of third party licensing entities requiring profit or sustainability, thereby driving up the costs.
  • “Cherry picking” of IP.
  • Problems with one university committing incredibly tight resources to another – not going to happen!
  • State law: Texas requires fair value for IP.
  • No evidence the model would improve more “modest” operations.
  • Technology transfer offices (TTO) believe it to be an inappropriate allocation of their resources and an inappropriate relationship.
  • Harm to the faculty-TTO relationship.

And these arguments are just the start of the problems for the free-agent model! In short, pursuing this model will waste a lot of valuable time and resources trying to “fix” all the problems with it. And guess what! Even if it is made to work, it will have a minimal to negligible effect on the speed or quality of the commercialization of university research to the marketplace. There will not be an explosion of discoveries or inventions suddenly being brought forth. There will be no explosion of new jobs! Do we really want to do this to ourselves?

Even at its best, the commercialization of university research is a “contact sport” requiring the building of relationships among many stakeholders. Those relationships take time and don’t work well as a long-distance marriage. Disputes and disagreements within such relationships should not become fodder for promoting a concept that is doomed to failure from the outset. 

But most insidious to me in the “free-agent” model is its failure to understand, promote, and accelerate the single most important step that we need to take in America: the creation of communities of innovation. We should not be running to MIT, Stanford, or some other presumed bastion of the almighty to enhance commercialization – my apologies to my good friends at MIT and Stanford. We should be building regional innovation ecosystems similar to the ones springing up all around America using best practices from our peers such as MIT or Stanford. That means doing the hard work of improving our local university technology transfer and commercialization infrastructure as I’ve advocated throughout my articles and as other leaders, such as the Association of University Research Parks and AUTM, have advocated. There are many extant great ideas for improving commercialization such as “proof-of-concept” funding and innovation centers. Free-agency as recommended in the Council’s interim report is a bad idea and a waste of time and resources. I strongly urge the President’s Council on Jobs and Competitiveness to remove this recommendation and replace it with ones that will have the desired outcome: jobs and prosperity for Americans.

Tuesday, October 25, 2011

What Is Easy Has Been Done

By Keith McDowell

It used to be so easy. The “lone wolf” researcher observed natural phenomena and collected data using homemade equipment. Or perhaps those with a theoretical bent puzzled over data and speculated on a new theory using only pencil, paper and their native intellect. Einstein became the poster child for the iconoclastic scientist with his unkempt appearance and penetrating, but friendly, eyes. Rarely did a polymath appear able to leap over discipline boundaries à la Superman. Such nimble gymnastics mostly weren’t needed.

But then the Twentieth Century arrived with an exponential explosion of science, engineering, and technology. Disciplinary research boundaries collapsed as interdisciplinary became the buzzword of the middle part of the century followed by multidisciplinary and now transdisciplinary in the first decade of the Twenty-First Century. The “lone wolf” or individual researcher was overrun by teams, research centers and institutes, national laboratories, industrial R&D laboratories, and now “lablets,” innovation hubs, and innovation centers. The century of the physical sciences was replaced by a spurt in the life sciences.

The structure of funding for research moved from potentates and personal donors to industry and government while the nature of the funding shifted from pure basic research with scientific significance as the principal measure for funding to use-directed research with “broader impact” – often under the umbrella of grand challenges – as a significant metric. Of course, what constitutes “broad impact” or “relevance” as it is sometimes known is mostly in the eye of the beholder. Some would even argue that the skill of grantsmanship supersedes the natural research ability of researchers when it comes to promotion or tenure. Certainly one’s record of grantsmanship is as important as one’s publication record, almost independent of the quality of the research.

And then we have the phenomenon of “relevance” and “broad impact” being overtaken and encompassed by the newest trend: the commercialization of university research and the desire to include innovation, commercialization, and entrepreneurial metrics as measures of faculty productivity. Even teaching with the advent of a multiplicity of “learning styles” and the concomitant introduction of many new advanced technologies for delivering content has not been immune to transformational change – not to mention the rapid expansion of the knowledge frontier and the race to keep up in lecture content and textbooks.

From another perspective, the collapse at the end of the Twentieth Century of the meso-scale or nano-scale barrier that bridges atoms and molecules to the micro-scale along with the parallel growth and ability to attack biological systems brought about a new research concept or paradigm: convergence. Convergence was celebrated by a new acronym “nbic” which stands for nano-bio-info-cogno. Later, the letter “e” was added at the end to include “eco.” Personally, I prefer to rearrange the letters of the acronym to “bnice.” Somehow, the phrase “be nice” sounds better than the geek speak “nbice.”

Coupled to the concept of convergence was the equally important, if not more important, concept of “complexity.” Complexity is in some sense a measure of the connectivity of knowledge or networks. Together, convergence and complexity along with other related events led to the creation of network science, an approach to parsing phenomena into three categories: physical, biological, and social. Network science entails a systems view of the world with layered architectures as the dominant structure and “emergent phenomena” occurring in the higher-tiered layers. Life itself is considered an emergent process in the macroworld which itself is built upon the micro-, nano-, and atomic and molecular layers. Ray Kurzweil, the futurist, postulated in his book, The Singularity Is Near, that the complexity of computers is poised to surpass that of the human brain and that computers will soon become “self-aware” as an emergent phenomenon. What will happen to humankind as such self-aware computers become exponentially brilliant and able to access all known knowledge at nearly the speed of light?

Will scientific research ever get a pink slip? John Horgan in his book The End of Science would make us believe so. Convergence, network science, and complexity theory might lead us to think so as we pull together all branches of science into the final grand frontier. It is an interesting debate, best left for now to the coffee klatch and student debate. It certainly is the case that full access to the nanoscale and BNICE convergence have brought about the social phenomena of self-assembly of STEM and health personnel into teams taking on societal problems. Furthermore, global competition and the resulting explosion in the commercialization of university research have taken us to the transdisciplinary age with STEM and health teams joining forces with business and legal teams as well as those who understand the social dimensions to ensure prosperity for Americans. As I like to tell my colleagues in all fields of endeavor, what is easy has been done! Get used to it!

What are the implications of this massive paradigm shift in how we do research, keeping in mind that the scientific method per se has not changed? Indeed, ARE there any implications, especially for the innovation ecosystem? Certainly, we must at a minimum be aware that a transformation has occurred however one chooses to characterize it. In my experience, far too many people simply don’t get it and merely view what is happening as a treadmill dialed to a faster speed. While that is true, it is only a fraction of the real story.

In other articles and in Go Forth and Innovate!, I’ve addressed some of the changes that, in my opinion, need to occur in grantsmanship, publication of research, peer review, research compliance, and a retuning of academe to replace the service function with a community engagement function that includes all aspects of innovation and entrepreneurship. Each of these areas and many more deserve a detailed review. But it is important that such reviews and the subsequent changes that are made be understood within the broader context of how the scientific endeavor has changed. It is important for all to understand that what is easy has been done.

Friday, October 7, 2011

Who's in Charge?

By Keith McDowell

Admit it! You always wanted to be a rock star belting out tunes while adoring fans groveled in the mosh pit. Or maybe it was a movie star surrounded by great actors and with an Oscar to boot. Of course, the svelte look of a fashion model always danced before your eyes as you stared at your reflection in a mirror. And then there were those of you who aspired to be top dog in the commercialization of university research … we pause to reflect on this dissonant chord and to paraphrase a hit tune by Nancy Sinatra: were those boots really made for walking?

No one grows up wanting to lead a university office of technology commercialization – well, maybe there are a few hardy souls out there with vision. Indeed, such a career path didn’t even exist in my youth! But exist it does and it’s one of the top new professions in America demanding the highest of skills. Unfortunately for universities, it’s a profession that has grown up so quickly that we face a dearth of top-quality people to fill the rapid growth in positions. How did we get to this situation and what does it mean to be a leader of technology commercialization at a university?

The story begins with the Bayh-Dole Act of 1980. The act permitted universities to elect to pursue ownership of an invention developed from federal research grants in preference to the government and to actively commercialize the invention. Prior to Bayh-Dole, few universities engaged in the transfer of technology to industry and the process was limited principally to a licensing function. The offices responsible for that function became known as technology transfer offices (TTO) and the staff as technology managers. There were exceptions such as the Wisconsin Alumni Research Foundation, created in 1926, that presaged the future.

Following Bayh-Dole, most universities were rather slow to respond, and their focus was almost exclusively on technology transfer, not commercialization. Technology managers, typically lawyers, were hired, or untrained staff were coerced into that role. Organizations such as the Licensing Executives Society (LES, founded in 1965) and the Association of University Technology Managers (AUTM, founded in 1974) grew in prominence, and a new and important professional career path opened up in universities with a specific focus on technology transfer as opposed to commercialization.

TTOs in many universities at the end of the twentieth century were back offices reporting up the chain to vice presidents for research, often with a very small staff. Their functions included the following activities:
  • Invention disclosure
  • Valuation
  • Protection as intellectual property (IP)
  • Formation of business plan
  • Marketing
  • Licensing
  • Asset/portfolio management
The director of a typical TTO dealt mainly with faculty, the USPTO, and with licensees of their IP. It was an important job with definable activities that encompassed a reasonable skill set.

Several emergent forces upset the TTO applecart. First was the growing need for university incubators to house start-up companies created by faculty and subsequently by more complex alliances of people. University incubators demanded leaders and directors with skill sets appropriate to their management and with the vision to function in an ever-changing landscape. Even the putatively simple act of forming a university incubator is not straightforward.

Second was the push for regional economic development or “eco-devo” as some called it. The eco-devo push was often ill-defined and was thrust upon vice presidents for research to manage and to make sense of. No longer was an appearance at dog and pony shows designed to attract leading industry to a community sufficient. Demands grew for universities to engage with regional communities to help them grow their own economies. If Silicon Valley could do it, so could main-street America. The theme was endlessly repeated without a clue as to what should be done. And who at a university was going to do whatever it was that needed to be done?

Next was the realization that we live in a globally competitive world with other nations seemingly racing faster than America to create ecosystems supportive of startup company formation at the frontiers of research and development and supportive of the commercialization of new IP. Accelerating the commercialization of university research became a new goal for universities. Pressure was put on TTOs to “perform” and metrics were perused. Critics and many pundits interpreted the data as demonstrating underperformance and the need for a new paradigm. The notion of “proof of concept” funding emerged as a tool to achieve acceleration as well as a tool to realize the commercialization potential of otherwise dead or undeveloped IP. But who should manage and distribute such funds? Was this a new function for TTOs?

But processing ideas, discoveries, inventions, and the eventual IP into the commercial marketplace faster and better isn’t the final answer or the end of the story. America also recognized that we must innovate faster and better than everyone else. I’ve long advocated for the creation of innovation centers at universities fully engaged with a regional community of innovation. And that system of innovation at universities must be tightly coupled to all the processing and commercialization functions alluded to above. But who’s in charge? Who manages such a diverse portfolio of functions? Ultimately, responsibility resides with the vice president for research, but it also requires an ancillary high-level professional skilled in the multi-faceted aspects of innovation and the commercialization of university research.

Initially, TTOs responded to the transformational paradigm shift by the oldest of tricks. They added a commercialization function to their office and changed their name to office of technology commercialization (OTC). Recognizing the need for professional certification, a Certified Licensing Professional (CLP) program began in 2008 under the initiative of LES. Such endeavors are worthy and must be done, but are they enough? Do they truly recognize the transformational change that is underway? Do they recognize the need to have an integrated response to technology transfer, technology commercialization, regional economic development, innovation, or any other buzzword you want to add to the list? Are we doing what needs to be done?

The emergence of the commercialization of university research under the umbrella of an innovation ecosystem is a fascinating story and one that I’ve only briefly reviewed. It is a phenomenon that is in a state of flux as we leap up the transformation S-shaped curve and head for the next plateau. Exactly what will emerge systemically is up for debate, but one thing is clear. Someone must be in charge! And that someone will need to be a professional with a variety of non-overlapping skill sets and strong interpersonal skills to navigate around faculty, university administrators, lawyers of all stripes, entrepreneurs, investors, politicians, and even the world of critics and pundits. I don’t claim to have the answers as to how such a profession should be structured or even as to how universities should approach dealing with the diverse functionalities that would require such leaders. But I do believe that universities and those who speak for innovation in America need to address this issue immediately. Someone who knows what they are doing needs to be in charge of the overall innovation system at a university. Defaulting to an untrained vice president for research or to a siloed structure is not the answer.