If you don’t want to use ELK to view application logs, CloudWatch is the best alternative. It is fully managed by AWS, so there is no downtime to worry about. CloudWatch also supports JSON filtering, which makes it extremely handy when dealing with JSON data.
In this post, I will walk through configuring CloudWatch to stream application logs from an EC2 instance.
Our application writes logs with the Monolog library, organised into channels; later on we will also filter on those channels and create a dashboard.
1. Install the CloudWatch agent
wget https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm
sudo rpm -U ./amazon-cloudwatch-agent.rpm
2. Create a config file for CloudWatch to monitor log files.
The CloudWatch agent reads its configuration from a JSON file. The most important section is “logs_collected”, which has three key fields:
- file_path: The path of the log file whose contents will be streamed.
- log_group_name: The CloudWatch log group the entries will be written to.
- log_stream_name: The stream name within that group. Using {instance_id} as the stream name makes later debugging easier, since each instance gets its own stream.
The agent can stream multiple log files; to add another, simply append an entry to collect_list.
Paste the contents below into “/opt/aws/amazon-cloudwatch-agent/config.json”. If the file does not exist, create it.
Modify the collect_list entries to match your application’s log files.
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/www/demo/my_log_file.log",
            "log_group_name": "production-api",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  },
  "metrics": {
    "append_dimensions": {
      "AutoScalingGroupName": "${aws:AutoScalingGroupName}",
      "ImageId": "${aws:ImageId}",
      "InstanceId": "${aws:InstanceId}",
      "InstanceType": "${aws:InstanceType}"
    },
    "metrics_collected": {
      "cpu": {
        "measurement": [
          "cpu_usage_idle",
          "cpu_usage_iowait",
          "cpu_usage_user",
          "cpu_usage_system"
        ],
        "metrics_collection_interval": 60,
        "totalcpu": false
      },
      "disk": {
        "measurement": [
          "used_percent",
          "inodes_free"
        ],
        "metrics_collection_interval": 60,
        "resources": [
          "*"
        ]
      },
      "diskio": {
        "measurement": [
          "io_time",
          "write_bytes",
          "read_bytes",
          "writes",
          "reads"
        ],
        "metrics_collection_interval": 60,
        "resources": [
          "*"
        ]
      },
      "mem": {
        "measurement": [
          "mem_used_percent"
        ],
        "metrics_collection_interval": 60
      },
      "netstat": {
        "measurement": [
          "tcp_established",
          "tcp_time_wait"
        ],
        "metrics_collection_interval": 60
      },
      "swap": {
        "measurement": [
          "swap_used_percent"
        ],
        "metrics_collection_interval": 60
      }
    }
  }
}
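A malformed config file is a common reason for the agent failing to start, so it is worth running the JSON through a parser before restarting it. The sketch below (the local file name is illustrative; on a real instance you would check /opt/aws/amazon-cloudwatch-agent/config.json) writes a minimal logs-only config and validates it with Python’s built-in json.tool:

```shell
# Sketch: validate the agent config JSON before restarting the agent.
# The local path below is illustrative; on a real instance you would
# validate /opt/aws/amazon-cloudwatch-agent/config.json instead.
CONFIG=./config.json

# Write a minimal logs-only config for demonstration purposes
cat > "$CONFIG" <<'EOF'
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/www/demo/my_log_file.log",
            "log_group_name": "production-api",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
EOF

# Fail fast on malformed JSON before handing the file to the agent
python3 -m json.tool "$CONFIG" > /dev/null && echo "config OK"
```

If the file contains a stray comma or an unbalanced brace, json.tool prints the position of the error instead of “config OK”.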
3. Start the agent
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -c file:/opt/aws/amazon-cloudwatch-agent/config.json -s
The -s parameter at the end restarts the agent and instructs it to use the new config file.
If you visit the CloudWatch Logs console, you will see the new log group. The logs are streamed continuously, which is very convenient.
How to filter out JSON data?
We can use a JSON filter pattern to search structured data in the logs. For example, to find events whose level is 400:
{ $.level = 400 }
The filter syntax also supports OR conditions:
{ $.level = 400 || $.level = 500 }
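To see what such a pattern matches, the selection logic can be emulated locally. The sketch below runs the equivalent of { $.level = 400 || $.level = 500 } over made-up sample log lines (the data is purely illustrative):

```shell
# Emulate the CloudWatch JSON filter { $.level = 400 || $.level = 500 }
# against made-up sample log lines (illustrative data only).
cat > sample.log <<'EOF'
{"level": 200, "message": "ok"}
{"level": 400, "message": "bad request"}
{"level": 500, "message": "server error"}
EOF

# Keep only events whose "level" field is 400 or 500
python3 - <<'EOF'
import json

with open("sample.log") as f:
    for line in f:
        event = json.loads(line)
        if event["level"] in (400, 500):
            print(event["message"])
EOF
```

Against the real log group, the same pattern works in the Logs console filter box, or from the AWS CLI with `aws logs filter-log-events --log-group-name production-api --filter-pattern '{ $.level = 400 || $.level = 500 }'`.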

I’m a passionate engineer based in London.
Currently, I’m working as a Cloud Consultant at Contino.
Aside from my full-time job, I either work on my own startup projects or you will see me in a HIIT class 🙂