
I'm building a NodeJS application and trying to use the Elastic Stack to collect the logs. The logs I want are:

  • Error logs
  • Application logs, like: user logged in, user performed this task, system performed this task, etc.

Now, my application is hosted in Kubernetes, and I deployed the Elastic Stack to the same GKE cluster. I'm new to the Elastic Stack; I have a rough idea that we send data to Logstash, which forwards it to Elasticsearch, and then we can visualize it in Kibana. I tried to follow several tutorials but still don't have a solid idea of how to connect my app to the stack. So, may I please know:

  • where in the stack (Elasticsearch, Logstash, or Filebeat) I should send the logs,
  • how to send the logs from a NodeJS app to an Elastic Stack hosted in the same Kubernetes cluster,
  • what tools I should use to do this job.

Thanks in advance

Update: I have now managed to write the logs to files in the Node app using winston, so it creates `error.log` and `combined.log` in the app root directory:

const winston = require('winston');

const logger = winston.createLogger({
  level: 'info',
  format: winston.format.json(),
  defaultMeta: { service: 'user-service' },
  transports: [
    //
    // - Write all logs with level `info` and below to `combined.log`
    // - Write all logs with level `error` to `error.log`
    //
    new winston.transports.File({ filename: 'error.log', level: 'error' }),
    new winston.transports.File({ filename: 'combined.log' })
  ]
});

(According to my Dockerfile, the `WORKDIR` is `/usr/src/app`):

FROM node:lts as build-stage

WORKDIR /usr/src/app

ENV NPM_CONFIG_LOGLEVEL warn
COPY package.json package.json
COPY yarn.lock yarn.lock
RUN yarn install

COPY . .

RUN yarn build
RUN yarn install --production

FROM node:lts

WORKDIR /usr/src/app

COPY --from=build-stage /usr/src/app /usr/src/app

CMD yarn serve
EXPOSE 8000

Now I have installed Filebeat, but it picks up all the other logs (from Nginx and so on), not these. How do I tell it to collect exactly these two files?

THpubs

1 Answer


Disclaimer: I'm not a DevOps guy, but I used to work with the ELK stack from the user's perspective.

I think you can start with 3 basic components:

  • Logstash (or Filebeat)
  • Elasticsearch itself
  • Kibana

Logstash is supposed to read the logs that the NodeJS app produces (writes into some file) and send them to Elasticsearch.

Since you're running on K8S, chances are the Node application is deployed inside some pod. In this case, one possible solution is to add a sidecar container with a Logstash (or Filebeat) process in it. IMO this is the most flexible approach, although if you're running many pods on one machine (node), the log-shipper processes will eat your CPUs. If that is a concern, you can instead configure a volume that collects the logs from the pods and run one dedicated Logstash pod per node that reads all the logs from all the pods on that host.
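Roughly, the sidecar variant could look like the sketch below. Treat it as an illustration rather than a tested manifest: the names (`my-node-app`, `app-logs`) are placeholders, and it assumes winston is pointed at a dedicated directory (e.g. `/usr/src/app/logs`) so that directory can be shared as a volume without hiding the application code:

# Hypothetical pod spec: the app writes its winston logs into an emptyDir
# volume that a Filebeat sidecar also mounts and ships from.
apiVersion: v1
kind: Pod
metadata:
  name: my-node-app                # placeholder name
spec:
  containers:
    - name: app
      image: my-node-app:latest    # your application image
      volumeMounts:
        - name: app-logs
          mountPath: /usr/src/app/logs   # winston writes error.log / combined.log here
    - name: filebeat
      image: docker.elastic.co/beats/filebeat:7.4.0
      volumeMounts:
        - name: app-logs
          mountPath: /logs
          readOnly: true
        - name: filebeat-config
          mountPath: /usr/share/filebeat/filebeat.yml   # Filebeat's default config path
          subPath: filebeat.yml
  volumes:
    - name: app-logs
      emptyDir: {}
    - name: filebeat-config
      configMap:
        name: filebeat-config      # holds a filebeat.yml like the one sketched below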

In any case, Logstash is supposed to send the data to Elasticsearch and keep track of what has been sent so far. If you have many logs, consider creating one index in ES per day with some retention period, like 1 week; old data can then be removed cheaply by dropping whole indices. Note that ES won't do this entirely on its own; you automate it with an index lifecycle management (ILM) policy or a tool like Curator.
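If you ship with Filebeat directly to Elasticsearch, a daily index can be requested in the output configuration. A minimal sketch, assuming an in-cluster Service named `elasticsearch` and an arbitrary `app-logs` index prefix:

# filebeat.yml (fragment): one index per day, e.g. app-logs-2019.11.04
output.elasticsearch:
  hosts: ["elasticsearch:9200"]    # assumed Service name for the ES cluster
  index: "app-logs-%{+yyyy.MM.dd}"

setup.ilm.enabled: false               # in 7.x, ILM would otherwise override the custom index
setup.template.name: "app-logs"        # both template settings are required
setup.template.pattern: "app-logs-*"   # when the index name is customized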

Kubernetes-wise, the ES cluster can also be deployed in pods; it should expose ports for access (by default, 9200 for HTTP and 9300 for binary transport access). If you don't have many services, probably 2-3 nodes will be enough, and you can even start with a single pod if you don't care about high availability.
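A minimal sketch of an in-cluster Service for it could look like this (assuming your ES pods carry an `app: elasticsearch` label; adjust the selector to your actual deployment):

# Hypothetical ClusterIP Service so other pods (Filebeat, Kibana)
# can reach Elasticsearch at http://elasticsearch:9200
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  selector:
    app: elasticsearch   # must match the labels on your ES pods
  ports:
    - name: http
      port: 9200
    - name: transport
      port: 9300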

Now Kibana. It's a UI that connects to Elasticsearch and lets you slice and dice your log data. You can filter by level, host, message, whatever; ES is very good at search, so this use case is very well supported.

Again, as for K8S, you can deploy Kibana in a pod with an exposed HTTP port (5601 by default) for access from the browser. Pay attention here: that HTTP port will be accessed by a browser that is technically not part of your Kubernetes cluster, so define the K8S infrastructure accordingly, e.g. with a LoadBalancer Service or an Ingress.
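On GKE, one hedged sketch of that is a LoadBalancer Service (again assuming an `app: kibana` pod label):

# Hypothetical Service that gives Kibana an external IP on GKE
apiVersion: v1
kind: Service
metadata:
  name: kibana
spec:
  type: LoadBalancer   # GKE provisions an external load balancer for this
  selector:
    app: kibana        # must match the labels on your Kibana pod
  ports:
    - port: 80         # external port
      targetPort: 5601 # Kibana's default HTTP port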

This stack, although basic, is still very flexible and can work for real projects.

Mark Bramnik
  • Great, thanks. I've now managed to write the logs to files in the Node app using winston, so it creates `error.log` and `combined.log` in the app root directory (according to my Dockerfile, `WORKDIR /usr/src/app`). Now I have installed Filebeat, but it picks up all the other logs from Nginx and such, not these. How do I tell it to get exactly these two files? – THpubs Nov 04 '19 at 06:16
  • I haven't worked with Filebeat, but I believe you should configure the patterns of logs to read from (sources, or prospectors as they call them). You should find a filebeat.yml file with the configuration for this; see the sketch after these comments... https://www.elastic.co/guide/en/beats/filebeat/5.3/filebeat-configuration.html – Mark Bramnik Nov 04 '19 at 06:24
  • Looks like `prospectors` has been deprecated. It's giving me an error saying: `'filebeat.prospectors' has been removed` – THpubs Nov 04 '19 at 06:53
  • Well, maybe, it depends on the version of filebeat you use. Here is the link to the latest documentation: https://www.elastic.co/guide/en/beats/filebeat/current/configuring-howto-filebeat.html – Mark Bramnik Nov 04 '19 at 07:00
  • Also check out the file /etc/filebeat/filebeat.reference.yml; it should contain an example that you can use as a reference. – Mark Bramnik Nov 04 '19 at 07:00
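To make the comment thread concrete: here is a minimal filebeat.yml sketch that reads only the two winston files, using the current `filebeat.inputs` syntax (which replaced `filebeat.prospectors`). The paths assume the container's `WORKDIR /usr/src/app` from the question, and the output assumes an in-cluster Service named `elasticsearch`:

# filebeat.yml: read only the two winston log files and ship them to Elasticsearch
filebeat.inputs:
  - type: log
    paths:
      - /usr/src/app/error.log
      - /usr/src/app/combined.log
    json.keys_under_root: true   # winston writes JSON lines; lift the fields to the top level
    json.add_error_key: true     # record a parse-error field if a line isn't valid JSON

output.elasticsearch:
  hosts: ["elasticsearch:9200"]  # assumed Service name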