Storage
Data engine

Retrieve data extremely fast and efficiently.
Reduce computational resources and infrastructure costs.

Excellent data access

Built to handle huge datasets, and optimized for very large time series.

Key Benefits

High-speed loading

Explore a dataset in seconds.

Making data ingestion faster, more reliable, and easier to scale can help a company improve its business performance.

Advanced storage usage

Handle data files without loading them into memory.

We optimized the storage layer to deliver excellent time series ingestion rates without requiring large amounts of RAM or disk space.

Specially designed for Data Teams

We simplify data complexity so you can focus on what matters.

Shapelets API is designed to be user-friendly and intuitive so you can focus on analyzing your data.

Reduce infrastructure costs

Shapelets Data Engine allows you to reduce the total cost of ownership for data storage.

We reduce the need for computational resources and, as a result, infrastructure costs.

Incredible performance. A new step for data management.

The Shapelets Data Engine is a software component written in C++ and based on advanced technologies such as Fast, Ceramic and PonyORM. It can store time series data without time index information, making it extremely efficient and allowing for arbitrary time resolution.

When executing queries, it uses relational computing techniques to create an execution pipeline that reads data sequentially from multiple files without loading them into system memory, thereby optimizing file I/O operations.
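The pipeline idea above can be sketched in Python with lazy generators: files are consumed sequentially, one row at a time, so no file is ever fully loaded into memory. This is an illustrative sketch only, not the C++ engine's actual implementation.

```python
import csv
import os
import tempfile
from typing import Iterator


def rows_from_files(paths: list[str]) -> Iterator[dict]:
    """Yield rows from many CSV files as one lazy, sequential stream."""
    for path in paths:
        with open(path, newline="") as fh:
            # DictReader streams row by row; the file is never read whole.
            yield from csv.DictReader(fh)


# Demo with two small temporary files standing in for large data files.
paths = []
for chunk in ("t,v\n1,10\n2,20\n", "t,v\n3,30\n"):
    fd, p = tempfile.mkstemp(suffix=".csv")
    with os.fdopen(fd, "w") as fh:
        fh.write(chunk)
    paths.append(p)

# Aggregation consumes the stream without materializing it in memory.
total = sum(int(r["v"]) for r in rows_from_files(paths))
print(total)  # 60

for p in paths:
    os.remove(p)
```

The same pattern scales to arbitrarily many files, since peak memory is bounded by a single row rather than the combined file sizes.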

Main Components

Import/export data files in multiple formats

Automate the things you do most often and save time by turning multi-step tasks into just a few. Data scientists can import from common data file formats such as CSV, Parquet, and Arrow, and computation results and data stored in the system can be exported to multiple file formats seamlessly. These methods clear the way for complex operations across different file formats.
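As a hedged sketch of this format interoperability, the snippet below uses plain pandas; the actual Shapelets import/export calls are not shown here, and the Parquet/Arrow lines are commented out because they additionally require pyarrow.

```python
import io

import pandas as pd

# Import from CSV (here an in-memory string standing in for a file).
raw = io.StringIO("timestamp,value\n2024-01-01,1.5\n2024-01-02,2.5\n")
df = pd.read_csv(raw, parse_dates=["timestamp"])

# With pyarrow installed, the same frame round-trips to columnar formats:
# df.to_parquet("series.parquet"); pd.read_parquet("series.parquet")
# df.to_feather("series.arrow");   pd.read_feather("series.arrow")

print(df.shape)  # (2, 2)
```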


Pandas interface

Pandas DataFrames are fully integrated with the Shapelets storage system, so you can use this format when loading data into or exporting data from the data engine. The Python API loads and exports stored data in the same format: no need to learn new syntax or unfamiliar datatypes, and less time spent on data handling. The Pandas interface helps you get started swiftly.
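The point about unfamiliar datatypes can be illustrated as follows: data coming out of the engine would arrive as an ordinary DataFrame, so standard pandas operations apply unchanged. This is a generic pandas sketch, not a Shapelets API call.

```python
import pandas as pd

# An hourly time series, as a plain pandas DataFrame.
idx = pd.date_range("2024-01-01", periods=4, freq="h")
df = pd.DataFrame({"value": [1.0, 2.0, 3.0, 4.0]}, index=idx)

# Ordinary pandas tooling works directly: resample to daily means.
daily_mean = df.resample("D").mean()
print(float(daily_mean.iloc[0, 0]))  # 2.5
```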

Connection to streaming data queues

The Shapelets Python API lets you use any streaming or data-queueing technology to easily store streaming data in the system. This data may come from different data queues, such as Kafka or RabbitMQ, and stored data becomes immediately available for computation and visualization. With instant, high data availability, you can easily build continual-learning systems and monitor sensor infrastructures.
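A minimal sketch of such an ingestion loop, with Python's standard `queue.Queue` standing in for a real broker client (a real deployment would use a Kafka or RabbitMQ consumer, and the `stored` list stands in for the engine's append call):

```python
import json
import queue

# Mock broker: in production this would be a Kafka/RabbitMQ consumer.
broker = queue.Queue()
for i in range(3):
    broker.put(json.dumps({"sensor": "s1", "value": i * 1.5}))
broker.put(None)  # sentinel marking end of stream for this demo

stored = []
while True:
    msg = broker.get()
    if msg is None:
        break
    # In practice each record would be appended to the data engine,
    # making it immediately available for computation and visualization.
    stored.append(json.loads(msg))

print(len(stored))  # 3
```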


Python Native Data Access

Shapelets Engine improves the data access experience with enhancements for working with streaming data. The platform offers Python-native data access through a pythonic syntax based on Python comprehensions, which eases the execution of SQL queries. Users can choose between pure SQL queries and queries based on Python comprehensions; the system interprets either to produce efficient queries, creating an execution pipeline that reads data sequentially across multiple files without loading them into system memory, thereby optimizing file I/O operations.
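The comprehension-versus-SQL equivalence can be sketched with the standard library's sqlite3: the same filter is written once as SQL and once as a Python comprehension, and both yield the same result. An engine translating comprehensions into SQL would bridge exactly these two forms; the table and column names here are illustrative, not the Shapelets schema.

```python
import sqlite3

rows = [(1, 10.0), (2, 25.0), (3, 7.5)]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ts (t INTEGER, v REAL)")
con.executemany("INSERT INTO ts VALUES (?, ?)", rows)

# The same filter, expressed two ways:
sql_result = [v for (v,) in con.execute("SELECT v FROM ts WHERE v > 9")]
comp_result = [v for (_, v) in rows if v > 9]

print(sql_result == comp_result)  # True
```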