Media headlines frequently tout both the scarcity of machine learning talent and the promises of companies claiming their products automate machine learning and eliminate the need for ML expertise altogether. In his keynote at the TensorFlow DevSummit, Google's head of AI Jeff Dean estimated that tens of millions of organizations have electronic data that could be used for machine learning but lack the necessary expertise and skills. I follow these issues closely, since my work at fast.ai focuses on enabling more people to use machine learning and on making it easier to use.
Companies of all sizes are implementing AI, ML, and cognitive technology projects for a wide range of reasons across a disparate array of industries and customer sectors. Some AI efforts focus on intelligent devices and vehicles, which involve three simultaneous development streams: software, hardware, and constantly evolving machine learning models. Other efforts are internally focused enterprise predictive analytics, fraud management, or other process-oriented activities that aim to add a layer of insight or automation on top of existing data and tooling. Still others center on conversational interfaces distributed across an array of devices and systems, while yet more pursue AI and ML project goals for public- or private-sector applications that differ from these in more significant ways.
Meanwhile, the acquisition by Facebook, whatever form it takes, looks like a good fit given the U.S. company's investment in next-generation platforms, including VR and AR. It is also another, perhaps worrying, example of U.S. tech companies hoovering up U.K. machine learning and AI talent early.