Sep 24, 2014
Boeing and Liquid Robotics today announced a partnership to make water-borne robots that can handle a variety of surveillance jobs, ranging from hunts for submarines to the detection of drug traffickers.
Silicon Valley's Liquid Robotics is the manufacturer of the Wave Glider SV3, a $300,000 self-powered, seafaring data center that offers customers -- until now, mostly researchers and marine industry companies -- tools for investigating the open seas for months at a time. SV3s have a hybrid propulsion system that can drive the robot with either solar or wave power. Boeing is the world's second-largest defense contractor.
The new deal is aimed at augmenting Boeing's existing maritime surveillance systems -- airplanes like the P-8 submarine hunter and the Maritime Surveillance Aircraft -- with autonomous devices that can monitor the seas around the clock.
The goal of the partnership is to provide Boeing's customers with "the missing link" in a collection of tools that can now span from undersea depths into space, according to Gary Gysin, CEO of Liquid Robotics. The deal, likely to be worth many hundreds of millions of dollars, "makes the company," Gysin said. Among the many options the substantial new revenue gives Liquid Robotics is a possible future IPO, Gysin added.
Liquid Robotics first introduced the Wave Glider in 2011. Depending on the sensors deployed on the maritime robots, they can monitor large areas of the sea at the surface and can detect acoustically down to depths of 8,000 meters.
The Boeing partnership is the company's second major deal. In 2012, Liquid Robotics teamed up with Schlumberger, the world's largest oil services company, to form Liquid Robotics Oil and Gas. Customers include ConocoPhillips, Chevron, BP and others.
Gysin explained that Wave Gliders would likely be put to sea in fleets of hundreds or thousands, together acoustically sensing both below and on the surface, and transmitting what they find to Boeing aircraft or other vessels. Surveillance aircraft and ships "are expensive, and patrolling is like looking for needles in a haystack," said Gysin. "If you have fleets of Wave Gliders, doing the mundane [sea scanning], we can transmit [what they find] to the more valuable assets, and they can go interdict."
Added Egan Greenstein, senior director of autonomous maritime systems at Boeing -- a brand-new division -- "what you're seeing with the Liquid Robotics agreement is our efforts at stitching together what were standalone capabilities, successful on their own, into a network of solutions that can do maritime security. That network is more scalable, affordable and persistent, and we think it breaks open the maritime surveillance market for a lot of customers that didn't know how they were going to solve these problems in the maritime space."
Greenstein explained that Boeing is likely to sell Wave Glider technology and services to both "defense and civil agencies," meaning organizations like the US Navy and Coast Guard, as well as foreign governments.
While the Wave Glider fleets will be helpful in detecting offensive threats, both Boeing and Liquid Robotics expect them to aid governments in tracking human or drug traffickers and fish poachers, and in monitoring island or border disputes and other economic threats. "Every nation with a coastline wants to see a little bit further what's going on," Greenstein said. "That's a very expensive endeavor to do persistently."
Greenstein added that though the Liquid Robotics technology is not cheap, it adds a cost-effective method for more fully monitoring the seas.
Ultimately, that's the major promise of the Wave Glider technology: giving its users the ability to watch the seas at all times. With current surveillance systems, Greenstein said, it's too expensive to do that, meaning there are long stretches where no one is watching.
"What we're building with the Liquid Robotics product and technology," Greenstein said, "is the ability to put a grid of sensors out into the water, that can stay for months or years, and put sensors on those that can act as an extension of the eyes and ears of military commanders."
If you’re looking for a company which seems to embody all the principles of big data entrepreneurship under one roof, then look no further than Kaggle.
Crowdsourcing, predictive modelling, gamification – Kaggle has it all, and has worked out how to turn a profit from them.
The San Francisco-based business awards cash prizes to its teams of “citizen scientists” who compete to untangle big data challenges of all shapes and sizes.
And it isn’t just businesses which are benefitting – by applying the concept of crowd-sourcing to data analytics, they are helping to further scientific and medical research. Their projects include looking deep into the cosmos for traces of dark matter, and furthering research into HIV treatment.
Chief scientist at Google (which has itself benefitted from Kaggle’s research) and Kaggle investor, Hal Varian, describes it as “a way to organise the brainpower of the world’s most talented data scientists and make it accessible to organizations of every size.”
And that’s certainly an intriguing aim – as well as a highly profitable one – in a world where businesses of all sizes are beginning to cotton on to the benefits of big data. Even if every company could afford to set up its own data analytics department, there aren’t nearly enough people trained to do the job to go around!
As with all emerging sciences, there is a shortage of trained data scientists at the moment – but Kaggle has 150,000 of them, ready to farm out to the highest bidder.
As well as charging companies it works with (including Amazon, Facebook, Microsoft and Wikipedia) up to $300 per hour for consultancy work, the company organises competitions – which is where the gamification comes in.
I’ve written about gamification before here – and Kaggle works along the same lines, with the theory being that it is easier to get people to take part in something if it is presented to them as a challenge or competition of some sort.
Current challenges include assisting with schizophrenia diagnosis by identifying the condition from MRA neuroimaging data, and finding the Higgs Boson amidst the mountains of data collected by CERN’s Atlas particle physics experiments.
They are open to anybody to take part in, and all the information (as well as the necessary data sets) can be found on Kaggle's website.
Although it is frequently reported that they have "over 100,000 data scientists", these are actually registered users and competitors rather than employees. There are no qualification or experience barriers to registering as a Kaggle data scientist; previous winners have ranged from data science academics and professionals to enthusiastic, knowledgeable amateurs. However, certain competitions are occasionally reserved for "masters" – those who have shown they have the right stuff through their previous work with Kaggle.
The company also recruits its own staff to work on internal projects. In fact, it is advertising for recruits now – and although no requirements are listed, other than that applicants be "experienced", two questions on the application form ask for the mean and standard deviation of two sets of numbers.
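The statistics involved are elementary, which is presumably the point of the screening question. As a quick illustration in Python – with made-up number sets, since the form's actual questions aren't reproduced here – the standard library's `statistics` module handles both:

```python
import statistics

# Hypothetical sample sets; the numbers on Kaggle's actual form aren't given.
sample_a = [2, 4, 4, 4, 5, 5, 7, 9]
sample_b = [10, 12, 23, 23, 16, 23, 21, 16]

for name, data in (("A", sample_a), ("B", sample_b)):
    mean = statistics.mean(data)
    stdev = statistics.pstdev(data)  # population standard deviation
    print(f"set {name}: mean={mean}, stdev={stdev:.3f}")
```

For set A this prints a mean of 5 and a standard deviation of 2 (using the population formula; `statistics.stdev` would give the sample version instead).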
The concept is undoubtedly inspired by earlier pioneering work in crowd-sourcing data analysis, such as the Search for Extraterrestrial Intelligence at Home (SETI@home) project, and a competition organised by Netflix in 2009 offering $1 million to the person who came up with a better algorithm for providing movie recommendations.
Kaggle has taken those ideas and expanded on them: it acts as the middleman, with companies or organizations bringing their problems, and Kaggle packaging them into competitions, gathering the contestants and sharing out the rewards.
The data itself is often simulated, and contestants are challenged to come up with methods or algorithms which are more efficient than existing methods at solving the problem at hand. Using simulated data means that issues surrounding access to sensitive data can be sidestepped. Once that is done, the reward – currently up to $30,000, although occasionally much larger for the top projects – is paid.
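The competitive mechanic is easy to sketch. Kaggle's actual scoring infrastructure isn't described here, but the essence is that every entry is evaluated against hidden, held-out answers with a fixed error metric, and the lowest error tops the leaderboard. A minimal Python sketch, with invented team names and data:

```python
import math

def rmse(predictions, actuals):
    """Root mean squared error between two equal-length sequences."""
    assert len(predictions) == len(actuals)
    return math.sqrt(
        sum((p - a) ** 2 for p, a in zip(predictions, actuals)) / len(actuals)
    )

# Hidden test labels that contestants never see directly.
held_out = [3.0, 1.5, 4.0, 2.5]

# Two hypothetical submissions: a naive constant guess and a fitted model.
entries = {
    "team_baseline": [2.0, 2.0, 2.0, 2.0],
    "team_model":    [2.9, 1.6, 3.8, 2.4],
}

# Rank entries by error, lowest (best) first.
leaderboard = sorted(entries, key=lambda team: rmse(entries[team], held_out))
for team in leaderboard:
    print(team, round(rmse(entries[team], held_out), 3))
```

Because contestants only ever see their score on the metric, not the held-out answers, the organiser can compare hundreds of submissions objectively while the sensitive ground truth stays private – the same property that makes simulated data attractive.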
One of its best known success stories was the Heritage Health Prize, which awarded $3 million last year to the winning entrant, whose algorithm most accurately predicted which patients would be admitted to hospital in the coming 12 months, from a set of medical data.
They also offer the Kaggle In Class service – an academic spin-off of the main brand which offers free data processing tools and simulated challenges. It is intended for use in schools and colleges struggling to meet the challenges of training the first generations of professional data scientists.
Of course, like anything new, it isn't without its critics. In particular, questions have been asked about how valuable the research it leads to actually is. Often, they say, the biggest challenges in data analysis revolve around what data is needed, and what questions should be asked – and Kaggle's pre-packaged competitions take this element out of the equation. The crowdsourced data scientists might be working on the solution to a particular problem, but is it the correct one? And might there be more relevant data elsewhere, beyond what is supplied in the competition package?
This might be a fundamental limitation of the competition model, at least until data collection and distribution evolve to the point where data can be made available to contestants in real time – and then, of course, there will be serious privacy and data protection issues to hurdle.
But as it stands today, Kaggle is one of the more forward-thinking innovations in big data, and has done much to raise awareness of the power that crowd sourcing data analysis can bring to businesses and organisations of all sizes.
Finally, please check out my other posts in The Big Data Guru column and feel free to connect with me via Twitter, LinkedIn, Facebook, slideshare and The Advanced Performance Institute.