STORAGE DATA ENGINE

Retrieve data extremely fast and efficiently.

Reduce computational resources and infrastructure costs

Excellent ingest rate

High powered to deal with dataset size or temporal resolution limitations

Key Benefits

High-speed ingestion

Explore a dataset in seconds. Making data ingestion faster, more reliable, and easier to scale can help a company improve its business performance.

Advanced storage usage

We optimized the storage layer to use less memory and less disk space while sustaining excellent time series ingestion rates.

Specially designed for Data Teams

We simplify data complexity so you can focus on what matters. Shapelets is designed to be user-friendly and intuitive, letting you concentrate on analyzing your data.

Reduce infrastructure costs

Shapelets Data Engine allows you to reduce the total cost of ownership for data storage. We reduce the need for computational resources and, as a result, infrastructure costs.


Incredible performance.
A new step for data management.

Shapelets Data Engine is a software component written in C++ and built on advanced technologies such as Fast, Ceramic and PonyORM. It can store time series data without time index information, making it extremely efficient and allowing for arbitrary time resolution.

When executing queries, it uses relational computing techniques to create an execution pipeline that reads data sequentially from multiple files without loading them into system memory, thereby optimizing file I/O operations.
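
The pipeline described above can be sketched in plain Python (this is a generic illustration of the streaming technique, not the engine's actual C++ internals): each stage lazily pulls one row at a time from the previous stage, so no file is ever fully loaded into memory.

```python
import csv
import os
import tempfile

def scan_files(paths):
    """Streaming 'scan' node: lazily yield rows from each file in turn."""
    for path in paths:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                yield row

def filter_rows(rows, predicate):
    """Streaming 'filter' node: rows flow through one at a time."""
    return (row for row in rows if predicate(row))

def project(rows, columns):
    """Streaming 'project' node: keep only the requested columns."""
    return ({c: row[c] for c in columns} for row in rows)

# Demo: two small CSV files stand in for on-disk time series chunks.
tmp = tempfile.mkdtemp()
paths = []
for i, body in enumerate(["ts,value\n1,10\n2,20\n", "ts,value\n3,30\n4,40\n"]):
    p = os.path.join(tmp, f"chunk{i}.csv")
    with open(p, "w") as f:
        f.write(body)
    paths.append(p)

# Compose the pipeline; nothing is read from disk until we iterate.
pipeline = project(
    filter_rows(scan_files(paths), lambda r: int(r["value"]) > 15),
    ["ts"],
)
result = [row["ts"] for row in pipeline]
print(result)  # ['2', '3', '4']
```

Because every stage is a generator, memory use stays constant no matter how many files the query touches.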

It also incorporates optimized resampling and gap-filling functionalities that help reduce data preparation times significantly.
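
As a rough idea of what resampling and gap-filling do, here is the equivalent operation in pandas (pandas stands in for illustration; the engine implements this natively and in optimized form):

```python
import pandas as pd

# An irregular series with a gap between 00:02 and 00:05.
ts = pd.Series(
    [1.0, 2.0, 5.0],
    index=pd.to_datetime(
        ["2024-01-01 00:00", "2024-01-01 00:02", "2024-01-01 00:05"]
    ),
)

# Resample onto a fixed 1-minute grid, then fill the gaps by
# time-weighted linear interpolation.
regular = ts.resample("1min").mean().interpolate(method="time")
print(regular.tolist())  # [1.0, 1.5, 2.0, 3.0, 4.0, 5.0]
```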

Main Components

Import/export data files in multiple formats

Automate the things you do most often. Save time by turning something that would take multiple steps into just a few. A data scientist can now import from common data file formats, such as CSV, Parquet and Arrow. This feature allows the data scientist to rapidly load the data directly from the UI. Additionally, computation results and data stored in the system can be exported to multiple file formats seamlessly. These improvements make complex operations and the handling of different file formats straightforward with our wizard tool.
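
Programmatically, the round trip for one of the supported formats (CSV) looks like this in pandas; the UI wizard performs the same steps interactively:

```python
import os
import tempfile

import pandas as pd

df = pd.DataFrame({"ts": [1, 2, 3], "value": [10.0, 20.0, 30.0]})

path = os.path.join(tempfile.mkdtemp(), "export.csv")
df.to_csv(path, index=False)   # export computation results to a file
loaded = pd.read_csv(path)     # re-import them later
print(loaded.equals(df))       # True
```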

Pandas interface

Pandas DataFrames are fully integrated with the Shapelets storage system, so you can use this format when loading data into or exporting data from the data engine. The Python API loads and exports stored data in the same format. No need to learn new syntax or unfamiliar datatypes. Reduce time spent on data handling. This Pandas interface helps you get started quickly.
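
The shape of the interaction looks roughly like this. Note that `store`/`load` and the toy in-memory engine below are illustrative names, not the actual Shapelets API; the point is that a plain DataFrame goes in and a plain DataFrame comes out.

```python
import pandas as pd

class _ToyEngine:
    """Stand-in for the data engine's storage backend (illustration only)."""

    def __init__(self):
        self._tables = {}

    def store(self, name, df: pd.DataFrame):
        self._tables[name] = df.copy()

    def load(self, name) -> pd.DataFrame:
        return self._tables[name].copy()

engine = _ToyEngine()
engine.store("sensors", pd.DataFrame({"t": [1, 2], "v": [0.5, 0.7]}))
out = engine.load("sensors")   # back as a regular pandas DataFrame
print(type(out).__name__)      # DataFrame
```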

Connection to streaming data queues

Shapelets Python API allows using any streaming or data queueing technology to easily store streaming data in the system. This data may come from different data queues such as Kafka or RabbitMQ. Consequently, stored data becomes immediately available for computation and visualization. Instant and high data availability. Now you can easily build continual learning systems and monitor sensor infrastructures and systems.
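
A minimal sketch of the consumer loop, with a standard-library `queue.Queue` standing in for a Kafka or RabbitMQ consumer (the loop shape is what matters: messages arrive, get parsed, and are appended to storage, where they become immediately available):

```python
import json
import queue

stream = queue.Queue()
for i in range(3):
    stream.put(json.dumps({"sensor": "s1", "ts": i, "value": i * 1.5}))
stream.put(None)  # sentinel: end of stream

storage = []  # stand-in for the engine's append path
while True:
    msg = stream.get()
    if msg is None:
        break
    storage.append(json.loads(msg))  # parsed and stored as it arrives

print(len(storage), storage[-1]["value"])  # 3 3.0
```

With a real broker, only the source of `msg` changes; the store-on-arrival pattern stays the same.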

Python Native Data Access

Shapelets Engine has improved the data access experience with enhancements for working with streaming data. The platform boosts Python native data access by offering a pythonic approach with a syntax based on Python comprehensions. This feature eases the execution of SQL queries: the user can choose between pure SQL queries and queries based on Python comprehensions, and the system interprets these to produce efficient queries. It creates an execution pipeline that reads data sequentially from multiple files without loading them into system memory, optimizing file I/O operations.
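
The two query styles can be contrasted with a small example (sqlite3 stands in for the engine here; this is a sketch of the equivalence, not the Shapelets API):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE readings (ts INTEGER, value REAL)")
con.executemany(
    "INSERT INTO readings VALUES (?, ?)",
    [(1, 10.0), (2, 25.0), (3, 40.0)],
)

# Style 1: pure SQL.
sql_result = [
    ts for (ts,) in
    con.execute("SELECT ts FROM readings WHERE value > 20 ORDER BY ts")
]

# Style 2: a Python comprehension expressing the same query; an engine
# can interpret this shape and compile it into the same execution plan.
rows = con.execute("SELECT ts, value FROM readings ORDER BY ts").fetchall()
comp_result = [ts for (ts, value) in rows if value > 20]

print(sql_result == comp_result, sql_result)  # True [2, 3]
```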

