Eight Roads Ventures launches new $375M scale-up fund for European and Israeli startups

Eight Roads Ventures, the proprietary investment arm of Fidelity International, is officially launching its new European fund today.

Targeting scale-ups in Europe and Israel, ‘Eight Roads Ventures Europe’ will have $375 million in capital to deploy, mostly at the Series B and Series C stages, but also in bootstrapped scale-ups that have found product-market fit and traction and now need growth capital.

It plans to back a total of 15 to 20 companies, with an average investment size of between $10 million and $30 million, and will invest right across the region. Eight Roads Ventures also plans to remain sector agnostic, although enterprise, consumer, fintech and healthcare IT are name-checked as markets of particular interest.

“The strategy continues to be to find European scale-ups — and by Europe we mean Europe and Israel — and help them become global winners,” Davor Hebel, Managing Partner and Head of Eight Roads Ventures Europe, tells me during a call.

“We are very excited about the health of the European ecosystem. We see more and more of the best young talent choosing careers in entrepreneurship, and we see more and more early-stage funds popping up in different regions. And our strong belief is that there is no one place where great European companies are going to come up.”

Describing Europe as “truly the most scattered and distributed geography,” Hebel cites recent Eight Roads Ventures investments in companies founded in Hamburg, Malmo, Tel Aviv, and Paris, not just the most popular hubs of London, Berlin and Stockholm. “The real focus is to find great companies no matter where they are and to help them scale up from, typically, thirty to fifty employees to five hundred or one thousand employees,” he says.

Scaling up is also where Eight Roads Ventures sees a “resource gap” in the European market. This includes a big difference in the amount of growth capital available to companies in the U.S. compared to those in Europe. However, it’s not just money, but also a gap in knowledge of how to scale.

“This is where we want to bring our growth tool kit, and help companies around things like scaling sales and marketing, and expanding internationally, building layers of management, all the things that European companies are looking to do as they become globally and regionally successful,” says Hebel.


After launching in the US, Instagram expands its shoppable posts feature to business users in eight other countries

Instagram is launching its Shopping feature for business accounts in eight new countries: Canada, Brazil, the United Kingdom, Germany, France, Italy, Spain and Australia. The photo-sharing app first began testing shoppable photo tags in November 2016 before making Shopping on Instagram available to businesses in the United States last year.

Since Instagram doesn’t allow links in captions, Shopping on Instagram is intended to make it easier for brands to drive followers to their e-commerce stores, while ensuring that those users continue spending time in the app before clicking away. Before Instagram launched the feature, several third-party services were created to make posts shoppable and remain popular, including Like2Buy and LikeToKnowIt.

When a post using Shopping on Instagram is tapped, it displays pop-ups with prices and a link to a new page within the app with more information and a “Shop Now” button that directs users to the product on the brand’s own online store. Instagram has said it plans to monetize the feature by allowing business users to display shoppable photos to people who don’t already follow them.

Instagram says about half of its daily active users currently follow an “active shopping business,” and today’s geographic expansion of Shopping on Instagram covers its second-largest market (after the U.S.), Brazil.

In a press statement, Instagram head of business Jim Squires said “People come to Instagram every day to discover and buy products from their favorite businesses. We want to be that seamless experience. Whether it’s a local artisan, florist or clothing store, shopping directly on Instagram has never been easier.”


IBM launches deep learning as a service inside its Watson Studio

IBM’s Watson Studio, the company’s service for building machine learning workflows and training models, is getting a new addition today with the launch of Deep Learning as a Service (DLaaS). The general idea here, which is similar to that of competing services, is to enable a wider range of businesses to make use of recent advances in machine learning by lowering the barrier to entry.

With these new tools, developers can build their models with the same open-source frameworks they are likely already using (think TensorFlow, Caffe, PyTorch, Keras, etc.). Indeed, IBM’s new service essentially offers these tools as cloud-native services, and developers can use a standard REST API to train their models with the resources they want, or within the budget they have. The service, which offers a command-line interface, a Python library and an interactive user interface, lets developers choose between different Nvidia GPUs, for example.
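As a purely illustrative sketch of that REST pattern, a training job can be described by a JSON payload naming the framework, the training command and the compute budget. Every field name and value below is hypothetical and does not reproduce IBM’s actual Watson Studio API:

```python
import json

# Hypothetical job specification for a cloud training service.
# Field names here are illustrative, not IBM's real API schema.
def build_training_job(framework, command, gpu_type, budget_hours):
    """Assemble the JSON payload a REST training endpoint might accept."""
    return {
        "model_definition": {
            "framework": framework,          # e.g. "tensorflow", "pytorch"
            "execution": {"command": command},
        },
        "compute": {
            "gpu": gpu_type,                 # which Nvidia GPU tier to use
            "max_hours": budget_hours,       # stay within the budget
        },
    }

job = build_training_job("tensorflow", "python train.py", "k80", 4)
payload = json.dumps(job)  # body for an HTTP POST to the training endpoint
```

The point of such a spec is that the same model code runs unchanged; only the declared resources and budget vary between submissions.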

The idea of a managed environment for deep learning isn’t necessarily new. With Azure ML Studio, Microsoft offers a highly graphical experience for building ML models, after all. IBM argues that its service offers a number of distinct advantages, though. Among other things, it offers a drag-and-drop neural network builder that allows even non-programmers to configure and design their neural networks.

In addition, IBM’s tools will also automatically tune hyperparameters for its users. That’s traditionally a rather time-consuming process when done by hand and something that sits somewhere between art and science.
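Automatic tuning takes over the kind of manual sweep developers otherwise write themselves. A minimal grid-search sketch, with a stand-in scoring function rather than a real model (the peak location is invented for illustration):

```python
import itertools

def score(learning_rate, batch_size):
    # Hypothetical validation score; peaks at lr=0.01, batch=32.
    # In practice this would be a full train-and-evaluate run.
    return 1.0 - abs(learning_rate - 0.01) * 10 - abs(batch_size - 32) / 100

grid = {
    "learning_rate": [0.1, 0.01, 0.001],
    "batch_size": [16, 32, 64],
}

# Try every combination and keep the best-scoring one -- the tedious
# loop that an automated hyperparameter service replaces.
best = max(
    itertools.product(*grid.values()),
    key=lambda combo: score(*combo),
)
print(dict(zip(grid, best)))  # → {'learning_rate': 0.01, 'batch_size': 32}
```

Each grid cell costs a full training run, which is why handing the search to a managed service with elastic GPU capacity is attractive.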


Apple and IBM add machine learning to partnership with Watson-Core ML coupling

Apple and IBM may seem like an odd couple, but the two companies have been working closely together for several years now. That has involved IBM sharing its enterprise expertise with Apple and Apple sharing its design sense with IBM. The companies have actually built hundreds of enterprise apps running on iOS devices. Today, they took that friendship a step further when they announced they were providing a way to combine IBM Watson machine learning with Apple Core ML to make the business apps running on Apple devices all the more intelligent.

The way it works is a customer builds a machine learning model using Watson, taking advantage of data in an enterprise repository to train the model. For instance, a company may want to help field service techs point their iPhone camera at a machine and identify the make and model to order the correct parts. You could potentially train a model to recognize all the different machines using Watson’s image recognition capability.

The next step is to convert that model into Core ML and include it in your custom app. Apple introduced Core ML at the Worldwide Developers Conference last June as a way to make it easy for developers to move machine learning models from popular model building tools like TensorFlow, Caffe or IBM Watson to apps running on iOS devices.

After creating the model, you run it through the Core ML converter tools and insert it in your Apple app. The agreement with IBM makes it easier to do this using IBM Watson as the model building part of the equation. This allows the two partners to make the apps created under the partnership even smarter with machine learning.

“Apple developers need a way to quickly and easily build these apps and leverage the cloud where it’s delivered. [The partnership] lets developers take advantage of the Core ML integration,” Mahmoud Naghshineh, general manager for IBM Partnerships and Alliances explained.

To make it even easier, IBM also announced a cloud console to simplify the connection between the Watson model building process and inserting that model in the application running on the Apple device.

Over time, the app can share data back with Watson and improve the machine learning algorithm running on the edge device in a classic device-cloud partnership. “That’s the beauty of this combination. As you run the application, it’s real time and you don’t need to be connect
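That device-cloud loop can be sketched roughly as follows; the class and field names are hypothetical and not part of either company’s SDK:

```python
# Rough sketch of the device-cloud loop described above: the on-device
# model predicts in real time, and corrections are queued locally, then
# sent back to the cloud later to improve the model. All names invented.
class EdgeFeedbackQueue:
    def __init__(self):
        self.pending = []  # feedback waiting for connectivity

    def record(self, image_id, predicted, corrected):
        # Only mispredictions are worth sending back for retraining.
        if predicted != corrected:
            self.pending.append(
                {"image": image_id, "predicted": predicted, "label": corrected}
            )

    def flush(self):
        # Called when the device is back online; returns the batch
        # that would be uploaded to the cloud training service.
        batch, self.pending = self.pending, []
        return batch

queue = EdgeFeedbackQueue()
queue.record("img-001", predicted="pump-a", corrected="pump-a")  # correct, skipped
queue.record("img-002", predicted="pump-a", corrected="pump-b")  # kept
batch = queue.flush()  # one item queued for upload
```

The key property is that inference never waits on the network; only the retraining signal does.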


Tempe police chief says Uber “preliminarily…would likely not be at fault” for fatal crash

The chief of police in Tempe, Arizona, where an Uber self-driving car just hit and killed a pedestrian, has told the San Francisco Chronicle that “I suspect preliminarily it appears that the Uber would likely not be at fault in this accident.”

Chief Sylvia Moir explained after viewing the car’s own video of the event that “she came from the shadows right into the roadway,” and that “it would have been difficult to avoid this collision in any kind of mode.”

A lighted crosswalk was nearby, but the place where the accident occurred was in the dark. The car would almost certainly have been aware of the pedestrian, but it’s also possible that she moved out in front of the car faster than it could reasonably have been stopped.

The details are known only to Uber and the authorities at present and it wouldn’t be right to speculate too far, but Moir certainly seems to suggest that the latter scenario is a possibility.


