Businesses that harness data own their markets, and those that cannot face an uncertain fate. But outside of the lighthouse tech companies, data scientists are often blocked from creating value and demonstrating it to others in their organisation.
This is largely because deploying data science is tricky stuff, and requires significant, complex engineering -- especially as companies make the jump to the cloud. Data scientists want to be free to focus on the science, not spend their time learning about containers, operations, and cloud infrastructure. Instead, they're often forced to rely on engineering teams or central IT to deploy experiments.
This is a huge problem, because relying on other teams to deploy is fundamentally incompatible with the way data teams need to work.
Data science is an experimental and creative task. Creating value relies on being able to transform intellectual curiosity into real experiments. Teams need to be reflexive and exploratory -- not waiting on tickets or getting bogged down in bureaucracy. Any time a data scientist has to raise a ticket with IT, someone is waiting, and an idea is not getting tested.
Additionally, engaging and aligning with stakeholders and business leaders is a big concern for many analytics teams. To accomplish this effectively, data scientists need to move away from offline, static data and make data come to life through live experiments, which becomes tricky when they aren't in control of the deployment process.
According to Rexer Analytics' 2016 Data Science Survey, only 12% of surveyed data scientists report that their models are usually deployed, and just 37% say their models are deployed even most of the time. This state of affairs prompted us to build NStack, which lets data scientists self-serve: deploying their models and running them as they were intended to run -- in the wild, connected to real data, and demonstrating real value.
Desktop to Experiment in Two Steps
NStack puts power into the hands of data scientists by letting them deploy local Python models as cloud APIs in a few seconds -- no ops or IT team needed.
Once deployed, these models can instantly be turned into HTTP endpoints, or connected to hundreds of data sources -- such as data lakes, third-party APIs, and file stores -- to build sophisticated analytics workflows.
Models and workflows can be shared, reused, and composed together. Data teams can build something once and never have to do it again, instead of starting from scratch and redoing the same job each time. Additionally, everything is automatically monitored, versioned, and scalable, so data scientists can focus exclusively on science, not infrastructure.
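To make this concrete, here is a minimal sketch of the kind of local Python model a data scientist might deploy: a plain function wrapped in a JSON-in/JSON-out handler, which is the shape an HTTP endpoint exposes once deployed. The function names and the toy churn logic are illustrative assumptions, not NStack's actual interface.

```python
import json

def predict(features: dict) -> dict:
    """Toy churn model: flag a customer as at-risk when their
    recent logins drop below a threshold. Stands in for any
    local Python model a data scientist might deploy."""
    score = 1.0 if features.get("logins_last_30d", 0) < 3 else 0.1
    return {"churn_risk": score}

def handle_request(body: str) -> str:
    """JSON-in/JSON-out wrapper: the request/response shape an
    HTTP endpoint would expose once the model is deployed."""
    return json.dumps(predict(json.loads(body)))
```

Calling `handle_request('{"logins_last_30d": 1}')` returns a JSON body with a high churn risk; in production, that call would arrive over HTTP rather than as a direct function call.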
Where does NStack fit in?
Our customers use NStack to:
- Run churn models every evening on the data warehouse, and write results to Salesforce
- Run real-time propensity analysis on cookies and write the results to DMPs
- Calculate customer attribution with GA and Twitter Ads
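The first workflow above has a simple source → model → sink shape, which can be sketched in plain Python. The `warehouse_rows` and `to_salesforce` functions below are hypothetical stubs standing in for real connectors; NStack's data-source integrations would replace them.

```python
from typing import Iterable

def warehouse_rows() -> Iterable[dict]:
    # Stub source: a real workflow would read from the data warehouse.
    yield {"customer_id": "c1", "logins_last_30d": 1}
    yield {"customer_id": "c2", "logins_last_30d": 12}

def churn_score(row: dict) -> dict:
    # Toy model step: attach a churn risk to each customer row.
    row["churn_risk"] = 1.0 if row["logins_last_30d"] < 3 else 0.1
    return row

def to_salesforce(rows: Iterable[dict]) -> list:
    # Stub sink: collect results instead of calling a real CRM API.
    return list(rows)

def run_nightly() -> list:
    # source -> model -> sink: the shape of the nightly churn workflow.
    return to_salesforce(churn_score(r) for r in warehouse_rows())
```

Scheduling this to run every evening, and swapping the stubs for live connectors, is the part the platform handles for you.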
On average, it takes a single person a few minutes to deploy a model and build a workflow, instead of a team of four taking weeks or months (or years!). Our goal is for teams to play, hack, create, and run as many experiments as possible. If your team can't deploy experiments quickly enough, enter your email below and a member of our team will be in touch shortly to arrange a quick demo.
"LEGO Collectible Minifigures Series 7 : Computer Programmer" by wiredforlego is licensed under CC BY-SA 2.0