
ClickHouse OSS quick start

In this quick start tutorial, we'll get you set up with OSS ClickHouse in a few easy steps. You'll use the ClickHouse CLI (clickhousectl) to install ClickHouse, start a ClickHouse server, connect to your server to create a table, insert data into it, and run a SELECT query.

Install the ClickHouse CLI

The ClickHouse CLI (clickhousectl) helps you install and manage local ClickHouse versions, launch servers, and run queries. Install it with:

curl https://clickhouse.com/cli | sh

A chctl alias is also created automatically for convenience.

Install ClickHouse

ClickHouse runs natively on Linux and macOS, and runs on Windows via WSL (Windows Subsystem for Linux).

Use the CLI to install the latest stable version of ClickHouse:

clickhousectl local install stable
Note

This isn't the recommended way to install ClickHouse for production. If you're looking to install a production instance of ClickHouse, please see the install page.

Start the server

Start a ClickHouse server instance:

clickhousectl local server start --name my-first-server

The server runs in the background by default. To verify it's running:

clickhousectl local server list

Start the client

Connect to your running ClickHouse server:

clickhousectl local client --name my-first-server

You should see a smiling face as the client connects to your server running on localhost:

my-host :)

Create a table

Use CREATE TABLE to define a new table. Typical SQL DDL commands work in ClickHouse with one addition: tables in ClickHouse require an ENGINE clause. Use MergeTree to take advantage of the performance benefits of ClickHouse. Note that, unlike in most transactional databases, the PRIMARY KEY in ClickHouse determines how data is sorted on disk; it is not a uniqueness constraint:

CREATE TABLE my_first_table
(
    user_id UInt32,
    message String,
    timestamp DateTime,
    metric Float32
)
ENGINE = MergeTree
PRIMARY KEY (user_id, timestamp)
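In MergeTree, the primary key defines the on-disk sort order. When you specify PRIMARY KEY without a separate ORDER BY clause, the two are identical, so the table above could equivalently be written as:

```sql
CREATE TABLE my_first_table
(
    user_id UInt32,
    message String,
    timestamp DateTime,
    metric Float32
)
ENGINE = MergeTree
ORDER BY (user_id, timestamp)
```

Sorting by (user_id, timestamp) means queries that filter on user_id (and then timestamp) can skip large ranges of data.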

Insert data

You can use the familiar INSERT INTO TABLE command with ClickHouse, but it is important to understand that each insert into a MergeTree table creates what ClickHouse calls a part in storage. These parts are later merged in the background by ClickHouse.

In ClickHouse, prefer bulk inserts of many rows at a time (tens of thousands or even millions per batch) to minimize the number of parts that need to be merged in the background.

In this guide, we won't worry about that just yet. Run the following command to insert a few rows of data into your table:

INSERT INTO my_first_table (user_id, message, timestamp, metric) VALUES
    (101, 'Hello, ClickHouse!',                                 now(),       -1.0    ),
    (102, 'Insert a lot of rows per batch',                     yesterday(), 1.41421 ),
    (102, 'Sort your data based on your commonly-used queries', today(),     2.718   ),
    (101, 'Granules are the smallest chunks of data read',      now() + 5,   3.14159 )
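To see the parts your inserts created, you can query the system.parts table. Each INSERT produces one new part, and background merges gradually combine them (merged-away parts show active = 0):

```sql
-- List the storage parts backing my_first_table in the current database
SELECT name, rows, active
FROM system.parts
WHERE database = currentDatabase()
  AND table = 'my_first_table'
```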

Query your new table

You can write a SELECT query just like you would with any SQL database:

SELECT *
FROM my_first_table
ORDER BY timestamp

Notice the response comes back in a nice table format:

┌─user_id─┬─message────────────────────────────────────────────┬───────────timestamp─┬──metric─┐
│     102 │ Insert a lot of rows per batch                     │ 2022-03-21 00:00:00 │ 1.41421 │
│     102 │ Sort your data based on your commonly-used queries │ 2022-03-22 00:00:00 │   2.718 │
│     101 │ Hello, ClickHouse!                                 │ 2022-03-22 14:04:09 │      -1 │
│     101 │ Granules are the smallest chunks of data read      │ 2022-03-22 14:04:14 │ 3.14159 │
└─────────┴────────────────────────────────────────────────────┴─────────────────────┴─────────┘

4 rows in set. Elapsed: 0.008 sec.
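The table rendering above is just the client's default output format. You can request a different one with a FORMAT clause, for example one JSON object per row:

```sql
SELECT *
FROM my_first_table
ORDER BY timestamp
FORMAT JSONEachRow
```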

Insert your own data

The next step is to get your own data into ClickHouse. ClickHouse has many table functions and integrations for ingesting data. The examples below show one approach, or you can check out our Integrations page for a long list of technologies that integrate with ClickHouse.

Use the s3 table function to read files from S3. Because it is a table function, its result is a table that can be:

  1. used as the source of a SELECT query (allowing you to run ad-hoc queries and leave your data in S3), or
  2. inserted into a MergeTree table (when you're ready to move your data into ClickHouse)

An ad-hoc query looks like:

SELECT
    passenger_count,
    avg(toFloat32(total_amount))
FROM s3(
    'https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi/trips_0.gz',
    'TabSeparatedWithNames'
)
GROUP BY passenger_count
ORDER BY passenger_count;
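If you're not sure what columns a remote file contains, ClickHouse can infer the schema for you. DESCRIBE shows the inferred column names and types before you query or ingest anything:

```sql
DESCRIBE s3(
    'https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi/trips_0.gz',
    'TabSeparatedWithNames'
)
```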

Moving the data into a ClickHouse table looks like the following, where nyc_taxi is a MergeTree table:

INSERT INTO nyc_taxi
SELECT * FROM s3(
    'https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi/trips_0.gz',
    'TabSeparatedWithNames'
)
SETTINGS input_format_allow_errors_num=25000;
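If nyc_taxi doesn't exist yet, one way to create and populate it in a single step is CREATE TABLE ... AS SELECT, which derives the column types from the S3 data. This is a sketch: the ORDER BY tuple() here is a placeholder, and in practice you'd pick a sorting key that matches your most common queries:

```sql
CREATE TABLE nyc_taxi
ENGINE = MergeTree
ORDER BY tuple()  -- placeholder; choose a real sorting key for production
AS SELECT * FROM s3(
    'https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi/trips_0.gz',
    'TabSeparatedWithNames'
)
```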

View our collection of AWS S3 documentation pages for lots more details and examples of using S3 with ClickHouse.


Explore