
Are you having issues in your data ingestion processes?

Ingestion

Having issues when ingesting large volumes of data? Does it take too long?

Data volume

Slow access to databases, delaying ingestion and insights?

High costs

High cost and misuse of valuable resources such as storage and computing power?

Integration

Difficult integration of software and tools?

Data capacity

Low storage and processing capabilities?

Data granularity

Not enough granularity/detail in your time series?

In short…

are you unable to unify your workflow?


This is how Shapelets REC optimizes your data processes


Innovation Proposal

Indexing and storing any data through powerful vector databases

Using an Industry 4.0 example, we show how Shapelets REC indexes and stores all the data generated, giving users the power to handle, share, and use that data quickly and effectively.
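As a rough illustration of the idea (a minimal sketch using FAISS and synthetic sensor windows; this is not the Shapelets REC engine or API), incoming readings are turned into fixed-length vectors that can then be stored and searched by similarity:

    # Minimal vector-indexing sketch with FAISS (illustrative only, not the
    # Shapelets REC implementation). Requires: pip install faiss-cpu numpy
    import numpy as np
    import faiss

    DIM = 64                        # length of each vector, e.g. one sensor window
    rng = np.random.default_rng(0)

    # Synthetic Industry 4.0 data: 10,000 sensor windows of 64 samples each
    windows = rng.standard_normal((10_000, DIM)).astype("float32")

    index = faiss.IndexFlatL2(DIM)  # exact L2 index; IVF/HNSW variants scale further
    index.add(windows)              # index and store all generated data

    # Query: find the 5 stored windows most similar to a new reading
    query = rng.standard_normal((1, DIM)).astype("float32")
    distances, ids = index.search(query, 5)
    print(ids[0], distances[0])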

You can do things differently

Goodbye expensive data lake solutions

Shapelets REC integrates natively with Shapelets Store, creating a cost-effective solution for data scientists and process engineers to build virtual data lakes on demand, where data can be used in context with other enterprise data sources.

Cost-effective solutions

Unlimited tags

Real-Time

Indexes data to achieve the highest read throughput.

Long-term storage

Events, alarms, and metrics, ideal for compliance, audit, crisis management, and data science tasks.

Ultra Ingestion

More than 100 million documents per second.

Connectivity

Through industry-standard protocols (ZeroMQ, MQTT, HTTPS, InfluxDB Line Protocol); a connection sketch follows this feature list.

Compatibility with other systems

No more data aggregations

Extract the maximum value from data by keeping its original granularity

Native Python API

Indexes data to achieve the highest read throughput.
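A connection sketch for the Connectivity item above: publishing an event over MQTT with the paho-mqtt library. The broker host, topic name, and payload layout below are placeholders chosen for illustration, not documented Shapelets REC settings.

    # Illustrative MQTT publish using paho-mqtt (pip install paho-mqtt).
    # Host, port, topic, and payload layout are assumptions for this sketch.
    import json
    import time
    import paho.mqtt.publish as publish

    event = {
        "sensor": "press-line-3/temperature",
        "value": 81.4,
        "ts": time.time_ns(),          # nanosecond timestamp keeps full granularity
    }

    publish.single(
        topic="factory/events",        # hypothetical topic the REC listener subscribes to
        payload=json.dumps(event),
        hostname="rec.example.local",  # hypothetical broker / ingestion host
        port=1883,
        qos=1,
    )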

License

FLAT FEE YEARLY SUBSCRIPTION

NO RESTRICTIONS


OPEN SAVINGS

Seeing is believing

Benchmarks

The fastest on the market

Current market comparison

SOLUTION
(Six solutions compared; names shown as product logos.)

DATA MODEL
Tag-value | Relational | Tag-value | Relational | Hierarchical | Tag-value

SQL SUPPORT
Extensive | Extensive | Limited | Extensive | No | Limited

DISTRIBUTED COMPUTING
No (single machine, can be scaled horizontally) | No (single machine, can be scaled horizontally) | Yes (clustering, sharding, high availability) | Yes (automatic partitioning, replication) | No (single machine, can be scaled horizontally) | Yes (clustering, sharding)

TIME FOR 1M WRITES
~0.1 sec | ~1.8 sec | ~10.9 sec | ~12.6 sec | ~23.3 sec | ~40 sec

TIME FOR 10M WRITES
~1 sec | ~24.3 sec | ~99.9 sec | ~145 sec | N/A | N/A

FEATURES
Built-in HA, data compression, tiered storage | ACID transactions, built-in HA, low latency | Continuous queries, retention policies, downsampling | Hypertables, continuous aggregates, data compression | Built-in dashboarding, graphing, alerting | Built-in aggregation, data rollups, tiered storage

Setup

Shapelets REC

REC server process installation, in the cloud or on premises

More than 100 million messages per second using a simple server

 

Select your storage configuration

(RAM only, RAM + local storage, or RAM + local storage + cloud storage)

 

Start logging event data coming from ZeroMQ, MQTT, HTTPS, InfluxDB Line Protocol, etc.
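As a minimal sketch of that last step, here is how a single reading might be sent in InfluxDB Line Protocol over HTTP(S) with Python's requests library; the endpoint URL and database name are placeholders, not documented Shapelets REC parameters.

    # Illustrative write in InfluxDB Line Protocol over HTTP(S).
    # URL and query parameters are placeholders; consult the Shapelets REC
    # documentation for the real ingestion endpoint.
    import time
    import requests

    # Line protocol: <measurement>,<tags> <fields> <timestamp in ns>
    line = f"temperature,plant=malaga,line=3 value=81.4 {time.time_ns()}"

    resp = requests.post(
        "https://rec.example.local/write",   # hypothetical ingestion endpoint
        params={"db": "factory_events"},     # hypothetical database/bucket name
        data=line.encode("utf-8"),
        timeout=5,
    )
    resp.raise_for_status()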