In the State of Modelops 2022 report, 51% of large enterprises said they had done early-stage pilots or experiments in artificial intelligence but had yet to put them into production. Only 38% reported they could answer executive questions about the return on investment of AI, and 43% said their company is ineffective at finding and fixing issues in a timely manner.
These challenges raise the question of how to improve the productivity of developing, delivering, and managing ML models in production.
MLops or modelops? You may need both
Data scientists now have plenty of analytics tools to choose from when developing models, including Alteryx, AWS SageMaker, Dataiku, DataRobot, Google Vertex AI, KNIME, Microsoft Azure Machine Learning, SAS, and others. There are also MLops platforms that help data science teams integrate their analytics tools, run experiments, and deploy ML models during the development process.
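To make the experiment-tracking piece of that workflow concrete, here is a minimal sketch using MLflow, an open source MLops tool that is not among the platforms named above, so treat it purely as an illustration; the commercial platforms expose comparable tracking and model-registration APIs of their own.

```python
# Minimal sketch of MLops-style experiment tracking with MLflow and
# scikit-learn. Illustrative only; parameter choices are arbitrary.
import mlflow
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 200, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    # Record what was tried and how it performed so runs are comparable.
    mlflow.log_params(params)
    mlflow.log_metric("test_accuracy", accuracy_score(y_test, model.predict(X_test)))

    # Log the trained model as an artifact so a later deployment step can pick it up.
    mlflow.sklearn.log_model(model, artifact_path="model")
```

Tracking runs this way is what lets a team compare dozens of candidate models and hand a chosen artifact off to deployment, rather than rebuilding it from a notebook.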
Rohit Tandon, general manager for ReadyAI and managing director at Deloitte Consulting, explains the role of MLops in large-scale AI deployments. “As enterprises seek to scale AI development capacity from dozens to hundreds or even thousands of ML models, they can benefit from the same engineering and operational discipline that devops brought to software development. MLops can help automate manual, inefficient workflows and streamline all steps of model construction and management.”
Although many MLops platforms support deploying and monitoring models in production, their primary function is to serve data scientists during the development, testing, and improvement process. Modelops platforms and practices aim to fill a gap by providing collaboration, orchestration, and reporting tools about which ML models are running in production and how well they perform from operational, compliance, and...