From 15c9e98a81f5509eb503213fd4ab81651a6e4332 Mon Sep 17 00:00:00 2001
From: Nitish Tiwari
Date: Tue, 15 Nov 2022 21:34:06 +0530
Subject: [PATCH 1/2] temp

---
 README.md | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/README.md b/README.md
index 70a95d177..78b3e3179 100644
--- a/README.md
+++ b/README.md
@@ -6,7 +6,7 @@
 
-Parseable is an open source log storage and observability platform.
+Log observability platform.
 
 Quick Start | Documentation |
@@ -14,18 +14,27 @@
 Live Demo

-## Motivation
+## :wave: Introduction
+
+Parseable is an open source log observability platform. Written in Rust, it is designed for simplicity of deployment and use. It is compatible with standard logging agents via their HTTP output. Parseable also offers a built-in GUI for log query and analysis.
+
+We're focused on:
+
+* Simplicity - ease of deployment and use.
+* Efficiency - lower CPU and memory usage.
+* Extensibility - freedom to do more with event data.
+* Performance - lower latency, higher throughput.
+
+## :dart: Motivation
 
 Given the analytical nature of log data, columnar formats like Parquet are the best way to store and analyze it. Parquet offers compression and inherent analytical capabilities. However, indexing based text search engines are _still_ prevalent. We are building Parseable to take full advantage of advanced data formats like Apache Parquet and Arrow. This approach is simpler, more efficient, and much more scalable.
 
 Parseable is a developer friendly, cloud native logging platform that is simple to deploy and run, while offering a rich set of features.
 
-## How it works
+## :grey_question: How it works
 
 Parseable exposes a REST API to ingest and query log data. Under the hood, it uses Apache Arrow and Parquet to handle and compress high volume log data. All data is stored in S3 (or compatible systems). Parseable also has a bundled web console to visualize and query log data.
 
-#### Key differentiators
-
 - Written in Rust. Low CPU & memory footprint, with low latency, high throughput.
 - Open data format (Parquet). Complete ownership of data. Wide range of possibilities for data analysis.
 - Single binary / container based deployment (including UI). Deploy in minutes if not seconds.
From 0cbd441455f37e837a69b48769f36c31adf6a1df Mon Sep 17 00:00:00 2001
From: Nitish Tiwari
Date: Tue, 15 Nov 2022 21:59:33 +0530
Subject: [PATCH 2/2] temp

---
 README.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/README.md b/README.md
index 78b3e3179..8aeaabb7c 100644
--- a/README.md
+++ b/README.md
@@ -6,7 +6,7 @@
 
-Log observability platform.
+Cloud native log observability
 
 Quick Start | Documentation |
@@ -31,7 +31,7 @@ Given the analytical nature of log data, columnar formats like Parquet are the b
 
 Parseable is a developer friendly, cloud native logging platform that is simple to deploy and run, while offering a rich set of features.
 
-## :grey_question: How it works
+## :question: How it works
 
 Parseable exposes a REST API to ingest and query log data. Under the hood, it uses Apache Arrow and Parquet to handle and compress high volume log data. All data is stored in S3 (or compatible systems). Parseable also has a bundled web console to visualize and query log data.
 
@@ -41,7 +41,7 @@
 - Indexing free design. Lower CPU and storage overhead. Similar levels of performance as indexing based systems.
 - Kubernetes and Cloud native design, built ground up for cloud native environments.
 
-## Installing
+## :white_check_mark: Installing
 
 Run the below command to deploy Parseable in demo mode with Docker.
 
@@ -62,7 +62,7 @@
 Prefer other platforms? Check out installation options (Kubernetes, bare-metal) in our documentation.
 
 Instead of installing locally, you can also try out Parseable on our [Demo instance](https://demo.parseable.io). Credentials to login to the dashboard are `parseable` / `parseable`.
 
-## Usage
+## :100: Usage
 
 If you've already deployed Parseable using the above Docker command, use the below commands to create a stream and post event(s) to it. Make sure to replace `` with the name of the stream you want to create and post events to (e.g. `my-stream`).
 
 #### Create a stream
 
@@ -97,13 +97,13 @@
 curl --location --request POST 'http://localhost:8000/api/v1/logstream/
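The create-stream / post-event flow in the Usage section (the curl command is truncated above) can be sketched as a small client. The endpoint path and demo credentials are taken from the README; the use of HTTP Basic auth and a JSON-array payload are assumptions, not Parseable's documented API:

```python
import base64
import json
import urllib.request

BASE = "http://localhost:8000/api/v1/logstream"
# Demo credentials from the README; Basic auth is an assumption here.
AUTH = "Basic " + base64.b64encode(b"parseable:parseable").decode()

def create_stream_request(stream: str) -> urllib.request.Request:
    # PUT /api/v1/logstream/<stream> is assumed to create the stream.
    return urllib.request.Request(
        f"{BASE}/{stream}", method="PUT",
        headers={"Authorization": AUTH},
    )

def post_event_request(stream: str, event: dict) -> urllib.request.Request:
    # POST the event to the same path as a JSON body (payload shape assumed).
    return urllib.request.Request(
        f"{BASE}/{stream}", method="POST",
        data=json.dumps([event]).encode(),
        headers={"Authorization": AUTH, "Content-Type": "application/json"},
    )

req = post_event_request("my-stream", {"level": "info", "message": "hello"})
print(req.full_url)  # http://localhost:8000/api/v1/logstream/my-stream
# Sending requires a running Parseable instance (e.g. the Docker demo above):
# urllib.request.urlopen(create_stream_request("my-stream"))
# urllib.request.urlopen(req)
```

Only the request objects are built here; the `urlopen` calls are left commented out since they need the demo deployment running on port 8000.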