Indico has launched a new open source project called Enso, focused on simplifying the use of transfer learning with natural language.

Enso, as the company puts it, is designed to streamline the benchmarking of embedding and transfer learning methods for a wide variety of natural language processing tasks. For machine learning engineers and software developers, it offers a standard interface and a set of tools for comparing varied feature representations and target task models.
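As a rough illustration of what such a benchmarking interface involves, the sketch below runs two feature representations through one shared evaluation harness. This is a hypothetical, self-contained example in plain Python, not Enso's actual API; the featurizers, the toy dataset, and the 1-nearest-neighbor "target task model" are all illustrative stand-ins.

```python
# Hypothetical sketch of a benchmarking harness (NOT Enso's actual API):
# each featurizer maps text to a sparse count vector, and a shared
# target-task model is evaluated on top of every representation.
from collections import Counter

def bow_featurize(text):
    """Feature representation 1: bag-of-words counts."""
    return Counter(text.lower().split())

def char_featurize(text):
    """Feature representation 2: character trigram counts."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def similarity(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in set(a) & set(b))
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def nearest_neighbor_accuracy(featurize, train, test):
    """Target-task model: 1-nearest-neighbor classification
    over whichever feature representation is plugged in."""
    train_feats = [(featurize(x), y) for x, y in train]
    correct = 0
    for x, y in test:
        fx = featurize(x)
        pred = max(train_feats, key=lambda fy: similarity(fx, fy[0]))[1]
        correct += pred == y
    return correct / len(test)

# Toy sentiment data standing in for a real benchmark dataset.
train = [("great product works well", "pos"), ("awful broken waste", "neg"),
         ("really great quality", "pos"), ("broken and awful", "neg")]
test = [("works really well", "pos"), ("total waste broken", "neg")]

for name, feat in [("bag-of-words", bow_featurize),
                   ("char-trigrams", char_featurize)]:
    print(name, nearest_neighbor_accuracy(feat, train, test))
```

The point of the shared interface is that swapping representations is a one-line change, so results stay directly comparable across methods and datasets.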

“The Open Source community is the driving force for innovation in machine learning, and Indico has benefitted from it and embraces the open source effort fully,” said Slater Victoroff, co-founder and CTO at Indico. “Enso is a way for us to give back to the community and continue to promote the benefits of transfer learning to accelerate its adoption and reduce the barriers to machine learning.”

The Enso project is focused on addressing a core set of interrelated problems:
• A lack of academic reproducibility. Due to the use of custom datasets and variations in coding practices, it is difficult to determine whether a new methodology is truly effective.
• Weak baseline benchmarks that limit general applicability. New methods should be evaluated on a broad range of datasets to determine whether they represent a substantial improvement over alternatives.
• “Overfitting” to specific datasets. Many of the models used for benchmarking are tied to specific datasets, making it difficult to take a model trained in one domain and retrain it in another.

The Enso project promotes the availability of more general datasets and stronger baselines against which research can be compared. This will help users ascertain where a given method is effective and where it is not, accelerating the application of machine learning to practical problems.

“Measuring how well methods perform as the amount of training data increases is critical,” said Madison May, Indico machine learning architect and co-founder. “In real life examples, we often need to select for methods that perform well with only a few hundred labeled training examples. By providing a standard interface for benchmarking, we believe Enso can facilitate the development of more generalized models that have greater value to a broader base of users.”
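The idea in the quote can be sketched as a simple learning-curve sweep: train on progressively larger slices of labeled data and measure accuracy on a fixed held-out set, which reveals how a method behaves in the low-data regime. The code below is an illustrative sketch in plain Python, not Enso's API; the keyword-voting model and the synthetic dataset are invented for the example.

```python
# Illustrative learning-curve sweep (NOT Enso's actual API): record
# held-out accuracy as the amount of labeled training data grows.
import random

random.seed(0)

# Synthetic labeled data: the label is "pos" exactly when "good" appears.
data = [(f"sample {'good' if i % 2 == 0 else 'bad'} text {i}",
         "pos" if i % 2 == 0 else "neg") for i in range(600)]
random.shuffle(data)
train_pool, test = data[:500], data[500:]

def train_keyword_model(train):
    """Toy model: remember which labels each token co-occurred with."""
    votes = {}
    for text, label in train:
        for tok in text.split():
            votes.setdefault(tok, []).append(label)
    return votes

def predict(votes, text):
    """Predict the majority label among all votes cast by the tokens."""
    labels = [l for tok in text.split() for l in votes.get(tok, [])]
    return max(set(labels), key=labels.count) if labels else "pos"

curve = []
for n in [10, 50, 100, 500]:  # increasing amounts of labeled data
    votes = train_keyword_model(train_pool[:n])
    acc = sum(predict(votes, x) == y for x, y in test) / len(test)
    curve.append((n, acc))
    print(f"{n:>4} training examples -> accuracy {acc:.2f}")
```

Plotting `curve` for several methods side by side makes the trade-off concrete: the method that wins with 500 labeled examples is not always the one that wins with 50.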

Enso is compatible with Python 3.4+.