Logging Errors in Node.js

Logging refers to the process of recording some detail about application
behavior and storing it in a persistent medium. Having a good logging system is
a key feature that helps developers, sysadmins, and support teams to monitor the
behavior of an application in production, and solve problems as they appear.

Troubleshooting is not the only valid reason to log. Product teams and designers
also use logs to track user behavior (such as A/B testing), and the marketing
department can measure the impact of a specific marketing campaign through the
logs.

In short, logging can provide value for every department in an organization
provided that the correct things are logged in the right format, and analyzed
using a specialized tool.

This tutorial will explain the basics of logging in Node.js starting from the
built-in console module, then proceed to topics like choosing a logging
framework, using the right log format, structuring your messages and sending
them to a log management system for long-term storage and further analysis.

🔭 Want to centralize and monitor your Node.js logs?

Head over to Logtail and start ingesting your logs in 5 minutes.

Prerequisites

Before you proceed with this article, ensure that you have a recent version of
Node.js and npm installed locally on your
machine. You can also sign up for a free
Logtail account if you’d like to centralize
your application logs in one place, but this is not required to follow along
with this tutorial.

What should you log?

Before we discuss the mechanics of Node.js logging, let’s discuss the general
things that you should be logging in a Node.js application. It’s possible to log
too much or too little so these general guidelines are helpful when determining
what to log:

  • Think about the critical aspects of your program and identify the
    information you will need to debug an issue in production.
  • Log as much as possible in development at the appropriate level and turn off
    the superfluous details in production through an environment variable. You
    can always turn them back on if you need to trace a problem more closely.
  • Log data that can help you profile your code in the absence of specialized
    tools.
  • Log your Node.js errors, whether they are operational or not.
  • Log uncaught exceptions and unhandled promise rejections at the highest log
    level so that they can be fixed promptly.
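
As a concrete illustration of the last point, uncaught exceptions and unhandled
promise rejections can be reported through process-level handlers. The sketch
below uses the built-in console for brevity; the handler wiring is a common
pattern, not something this tutorial prescribes:

```javascript
// Report crashes at the highest severity before the process ends. After an
// uncaught exception the process state is unknown, so set a failure exit code
// and let a supervisor (such as systemd or pm2) restart the application.
process.on('uncaughtException', (err) => {
  console.error('Uncaught exception:', err);
  process.exitCode = 1;
});

process.on('unhandledRejection', (reason) => {
  console.error('Unhandled promise rejection:', reason);
  process.exitCode = 1;
});
```

Using process.exitCode instead of process.exit() gives pending log writes a
chance to flush before the process terminates.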

It might also be helpful to think about what not to log:

  • Don’t log sensitive user information such as passwords or credit card
    details.
  • Avoid logging anything that can cause you to fall afoul of any relevant
    regulations in places where your business operates.

Following these simple rules will help if you’re just getting started with
logging. As your application evolves, you’ll figure out how valuable your logs
are and update your logging strategy accordingly.

Logging using the console module

The most common way to log in Node.js is by using methods on the console
module (such as log()). It’s adequate for basic debugging, and it’s already
present in the global scope of any Node.js program. All the methods provided on
the console module log to the console, but there are ways to redirect the
output to a file as you’ll see shortly. These are the console methods commonly
used for logging in Node.js:

  • console.error(): used for serious problems that occurred during the
    execution of the program.
  • console.warn(): used for reporting non-critical unusual behavior.
  • console.trace(): used for debugging messages with extended information (a
    stack trace) about application processing.
  • console.info(), console.log(): used for printing informative messages
    about the application.

Let’s look at a quick example of using the logging methods on the console
object. Create a main.js file in the root of your project, and populate it
with the following code:

const fruits = [
  'apple',
  'banana',
  'grapefruit',
  'mango',
  'orange',
  'melon',
  'pear',
];

const basket = [];

function addToBasket(item) {
  if (basket.length < 5) {
    // log the action
    console.info(`Putting "${item}" in the basket!`);
    basket.push(item);
  } else {
    // log an error if the basket is full
    console.error(`Trying to put "${item}" in the full basket!`);
  }
}

for (const fruit of fruits) {
  addToBasket(fruit);
}

// log the current basket state
console.log('Current basket state:', basket);

Save the file, then run the program using the command below:

node main.js

You should observe the following output:

Putting "apple" in the basket!
Putting "banana" in the basket!
Putting "grapefruit" in the basket!
Putting "mango" in the basket!
Putting "orange" in the basket!
Trying to put "melon" in the full basket!
Trying to put "pear" in the full basket!
Current basket state: [ 'apple', 'banana', 'grapefruit', 'mango', 'orange' ]

Now that we can log to the console, let’s look at a way to store our log output
in a log file for further processing. You can do this by redirecting the output
of the program to a file as shown below:

node main.js > combined.log

You’ll notice that the following error logs were printed to the console:

Trying to put "melon" in the full basket!
Trying to put "pear" in the full basket!

Meanwhile, you’ll also notice that a new combined.log file is present in the
current working directory. If you inspect the file in your editor or with cat,
you’ll see the following contents:

Putting "apple" in the basket!
Putting "banana" in the basket!
Putting "grapefruit" in the basket!
Putting "mango" in the basket!
Putting "orange" in the basket!
Current basket state: [ 'apple', 'banana', 'grapefruit', 'mango', 'orange' ]

The reason why the error logs were printed to the console instead of being sent
to the combined.log file is that the error() method prints its messages to
the standard error (stderr) and the > operator works for messages printed to
the standard output (stdout) alone (both info() and log() print to
stdout).

To ensure that error logs are also placed in a file, you need to use the 2>
operator as shown below:

node main.js > main.log 2> error.log

Using > main.log lets you redirect the stdout contents to the main.log
file while 2> error.log redirects the contents of stderr to the error.log
file. You can inspect the contents of both files using cat. Start with
main.log:

cat main.log

This outputs the following:

Putting "apple" in the basket!
Putting "banana" in the basket!
Putting "grapefruit" in the basket!
Putting "mango" in the basket!
Putting "orange" in the basket!
Current basket state: [ 'apple', 'banana', 'grapefruit', 'mango', 'orange' ]

Next, display the contents of the error.log file:

cat error.log

Which should yield the following output:

Trying to put "melon" in the full basket!
Trying to put "pear" in the full basket!

If you want to log both types of messages to a single file, you can do the
following:

node main.js > app.log 2>&1

This redirects stdout to the app.log file, while 2>&1 redirects stderr to
stdout so that both streams end up in app.log:

Putting "apple" in the basket!
Putting "banana" in the basket!
Putting "grapefruit" in the basket!
Putting "mango" in the basket!
Putting "orange" in the basket!
Trying to put "melon" in the full basket!
Trying to put "pear" in the full basket!
Current basket state: [ 'apple', 'banana', 'grapefruit', 'mango', 'orange' ]

To learn more about input and output redirection, you can read about file
descriptors on the Wooledge wiki pages.
Don’t forget to check out the
Node.js Console documentation to learn
more about the other features of the console module.

Why you need a logging framework

Using the methods on the console module is a good way to get started with
Node.js logging, but it’s not adequate when designing a logging strategy for
production applications due to its lack of convenience features like log levels,
structured JSON logging, timestamps, logging to multiple destinations, and more.
These are all features that a good logging framework takes care
of so that you can focus on the problem you’re trying to
solve instead of logging details.

There are a lot of options out there when it comes to logging frameworks for
Node.js. They mostly offer similar features so choosing between them often boils
down to the one whose API you love the most. Here’s a brief overview of the most
popular logging packages on NPM that you can check out:

  • Winston: the most popular and
    comprehensive logging framework for Node.js
  • Pino: offers an extensive feature-set and claims to be
    faster than competing libraries.
  • Bunyan: provides structured JSON
    logging out of the box.
  • Roarr: use this if you need a single
    library for logging in Node.js and the browser.

In this tutorial, we’ll be demonstrating some basic features of a logging
framework through Winston since it remains the most popular logging framework
for Node.js at the time of writing.

Getting started with Winston

Winston is a multi-transport async
logging library for Node.js with rich configuration abilities. It’s designed to
be a simple and universal logging library with maximal flexibility in log
formatting and transports (storage). You can install Winston in your project
through npm:

npm install winston

After installing Winston, you can start logging right away by using the default
logger which is accessible on the winston module. Clear your main.js file
before populating it once more with the following code:

const winston = require('winston');

const consoleTransport = new winston.transports.Console();

winston.add(consoleTransport);

winston.info('Getting started with Winston');
winston.error('Here is an error message');

Before you can use the default logger, you need to set at least one transport
(storage location) on it because none are set by default. In the snippet above,
we’ve set the
Console transport
which means that subsequent log messages will be output to the Node.js
console. Run the program to see this in action:

node main.js

You should observe the following output:

{"level":"info","message":"Getting started with Winston"}
{"level":"error","message":"Here is an error message"}

Notice that the default logger has been configured to format each log message as
JSON instead of plain text. This is done to ensure that log entries are
structured in a consistent manner that allows them to be easily searched,
filtered, and organized by a log management system.

Without structured logging, finding and extracting the useful data that is
needed from your logs will be a tedious experience because you’ll likely need to
write a custom parsing algorithm for extracting relevant data attributes from
plain text messages, and this task can become quite complicated if the
formatting of each message varies from entry to entry.

Winston uses JSON by default, but it provides some other predefined options like
simple, cli, and logstash which you can investigate further. You can also
create a completely custom format by using winston.format. Under the hood,
this uses the logform module to format
the messages.

Since JSON is both human and machine-readable, it remains the go-to format for
structured logging in most Node.js applications. We recommend that you stick
with it unless you strongly prefer some other structured format (such as
logfmt for example).

Understanding Log levels

In the previous code block, you’ll notice the presence of the level property
in each log entry. The value of this property indicates how important the
message is to the application. Notably, this is absent in the native Console
module, and it’s one of the major reasons why its methods are unsuitable for
serious production-ready applications.

In general, log levels indicate the severity of the logging message. For
example, an info message is just informative, while a warn message indicates
an unusual but not critical situation. An error message indicates that
something failed but the application can keep working, while a fatal or
emergency message indicates that a non-recoverable error occurred and
immediate attention is needed to resolve the issue.

The exact log levels available to you will depend on your framework of choice,
although this is usually configurable. Winston provides seven log levels on its
default logger, each with a corresponding method. They are ordered from the
most severe to the least severe below:

const levels = {
  error: 0,
  warn: 1,
  info: 2,
  http: 3,
  verbose: 4,
  debug: 5,
  silly: 6
};

Severity ordering in Winston conforms to the order specified by the
RFC5424 document, in which the most severe level is numbered 0 and each
subsequent level ascends numerically, so the least severe level has the
highest number.

winston.error('error');
winston.warn('warn');
winston.info('info');
winston.verbose('verbose');
winston.debug('debug');
winston.silly('silly');

The log level for an entry has an important consequence when logging. It
determines if the entry will be emitted by the logger during program execution.
You can test this out by placing each of the six logging methods above in your
main.js file and executing it. You’ll notice that only the first three appear
in the Node.js console:

{"level":"error","message":"error"}
{"level":"warn","message":"warn"}
{"level":"info","message":"info"}

That’s because the default logger is set to log at the info level by default.
This means that only messages with a minimum severity of info (or a maximum
number of 2) will be logged to the configured transport (the console in this
case). This behavior can be changed by customizing the level property on the
transport as shown below:

. . .

winston.add(consoleTransport);

consoleTransport.level = 'silly';

winston.error('error');
winston.warn('warn');
winston.info('info');
winston.verbose('verbose');
winston.debug('debug');
winston.silly('silly');

With the minimum severity level now set to silly, all the logging methods
above will now produce some output:

{"level":"error","message":"error"}
{"level":"warn","message":"warn"}
{"level":"info","message":"info"}
{"level":"verbose","message":"verbose"}
{"level":"debug","message":"debug"}
{"level":"silly","message":"silly"}

It’s important to log at the appropriate level so that it’s easy to distinguish
between purely informative events and potentially critical problems that need to
be addressed immediately. Log levels also help to reduce the verbosity of
logging so that some messages are essentially turned off where they are not
needed. Usually, production environments will run the application at the info
level by default while testing or debugging environments typically run at the
debug or the lowest level in the hierarchy.

This setting is usually controlled through an environment variable to avoid
modifying the application code each time the log level needs to be changed:

consoleTransport.level = process.env.LOG_LEVEL || 'info';

A starting point for your log entries

A good log entry should consist of at least the following three fields:

  • timestamp: the time at which the entry was created so that we can filter
    entries by time.
  • level: the log level, so that we can filter by severity.
  • message: the log message that contains the details of the entry.

Using the default Winston logger gives us only two of the three properties, but
we can easily add the third by creating a custom logger. Update your main.js
file as shown below:

const winston = require('winston');

const logger = winston.createLogger({
  level: 'info',
  format: winston.format.json(),
  transports: [new winston.transports.Console()],
});

logger.info('Info message');

Three basic things to configure on a custom logger are the minimum log level,
the format of the log messages, and where the logs should be output. This
logger above does not behave too differently from the default logger at the
moment, but we can easily customize it further.

For example, let’s add the missing timestamp field on all log entries. The way
to do this is by creating a custom format that combines the timestamp() and
json() formats as shown below:

const logger = winston.createLogger({
  level: 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.json()
  ),
  transports: [new winston.transports.Console()],
});

logger.info('Info message');

After configuring the logger as shown above, a timestamp field will be
included in each entry:

{"level":"info","message":"Info message","timestamp":"2022-01-22T08:24:44.305Z"}

You can also configure the format of the datetime value in the timestamp field
by passing an object to the timestamp() method as shown below. The string
accepted by the format property must be one that can be parsed by the
fecha module.

winston.format.timestamp({
  format: 'YYYY-MM-DD HH:mm:ss',
})

This yields the following output:

{"level":"info","message":"Info message","timestamp":"2022-01-23 13:46:35"}

Writing good log messages

The way messages are crafted is essential to good logging practices. The whole
point of logging is to help you understand what is happening in your
application, so it’s necessary to adequately describe the details of each entry
using detailed and concise language so that your logs don’t turn out to be
useless when you need them the most. Some examples of bad log messages include
the following:

Something happened
Transaction failed
Couldn't open file
Failed to load resource
Task failed successfully

Here are examples of better log messages:

Failed to open file 'abc.pdf': no such file or directory
Cache hit for image '59AIGo0TMgo'
Transaction 3628122 failed: cc number is invalid

Adding context to your log entries

Another important way to furnish your log entries with useful details is by
adding extra fields to each JSON object aside from the three already discussed.
Good starting points for the data that you can add to your log entries include
the following:

  • HTTP request data such as the route path or verb.
  • IP addresses.
  • Session identifiers.
  • Order or transaction IDs.
  • Exception details.

You can do so by passing an object as the second argument to each logging
method:

logger.info('Starting all recurring tasks', {
  tag: 'starting_recurring_tasks',
  id: 'TaskManager-1234729',
  module: 'RecurringTaskManager',
});

This yields the following output:

{"id":"TaskManager-1234729","level":"info","message":"Starting all recurring tasks","module":"RecurringTaskManager","tag":"starting_recurring_tasks","timestamp":"2022-01-23 14:51:17"}

If you add context to your log entries in this manner, you won’t need to repeat
the information in the message itself. This also makes it easy to filter your
logs, or to find a specific entry based on some criteria.

Storing logs in files

Logging to the console may be good enough in development, but it’s important to
record the entries into a more permanent location when deploying to production.
Winston provides a
File transport
to help you direct entries to a file. You can use it via the transports
property as shown below:

const logger = winston.createLogger({
  level: 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.json()
  ),
  transports: [new winston.transports.File({ filename: 'combined.log' })],
});

This replaces the Console transport with the File transport so all emitted
entries will now be placed in the combined.log file. You can log to more than
one transport at once so you can log to both the console and a file using the
snippet below:

transports: [
  new winston.transports.Console(),
  new winston.transports.File({ filename: 'combined.log' }),
]

To prevent a log file from getting too big, you should rotate them through a
transport like the
winston-daily-rotate-file.
You can also use an external tool like
logrotate if you’re deploying to a
Linux-based operating system.

Aggregating your logs

Once you’re used to writing and reading logs, you’ll want to aggregate them in a
specialized log management tool. This helps you centralize your logs in one
place, and filter them to debug an issue or gather insights from them in various
ways. You can even discover usage patterns that could come in handy when
debugging specific issues, or create alerts to get notified when a specific
event occurs.

Logtail is a specialized log management tool
that integrates perfectly with several Node.js logging frameworks. To use it
with Winston, you’ll need to install the
@logtail/node and
@logtail/winston packages:

npm install @logtail/node @logtail/winston

Afterward, you can set Logtail as one of the transport options on your Winston
logger and log as normal. Note that you’ll need to
sign up for Logtail to retrieve your source
token. Ensure to replace the <your_source_token> placeholder below with this
token string.

const winston = require('winston');
const { Logtail } = require('@logtail/node');
const { LogtailTransport } = require('@logtail/winston');

// Create a Logtail client
const logtail = new Logtail('<your_source_token>');

const logger = winston.createLogger({
  level: 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.json()
  ),
  transports: [new winston.transports.Console(), new LogtailTransport(logtail)],
});

logger.info('Info message');

After executing the above snippet, you should see the log entry appear in the
Live tail section of the Logtail dashboard.

For more information on how Logtail integrates with Node.js applications, please
see the full documentation.

Summary

  • Logging is a necessary task in application development and it should be taken
    seriously.
  • Due to the deficiencies of the Node.js console module, it is recommended
    that a suitable logging framework is employed for this task.
  • Structured logging is key for automated processing (such as for alerting or
    auditing).
  • Use JSON format for log entries to maintain human and machine readability.
  • Always log at the appropriate level and turn off superfluous levels in
    production.
  • Ensure all log entries have a timestamp, log level, and message.
  • Improve your log entries with contextual information.
  • Use a log management solution, such as Logtail, to aggregate and monitor your
    logs as this can help you drastically improve the speed at which issues are
    resolved.

Conclusion and next steps

We hope this article has helped you learn enough to get started with logging in
your Node.js applications. As the title suggests, this is only the starting
point of your logging journey, so feel free to do some more research on this
topic as needed. We also have specialized guides that provide more detail on
everything you can do with logging frameworks like
Winston
and Pino, and also some Node.js logging best practices to follow, so be sure
to check those out as well.

Thanks for reading, and happy coding!


Licensed under CC-BY-NC-SA

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

The first task most developers new to JavaScript run into is how to log events
to the console with the console.log method. Search for information on debugging
JavaScript code and you’ll find hundreds of blog posts and StackOverflow
answers advising you to “just” print data to the console with console.log. The
practice is so widespread that code-quality rules like no-console had to be
introduced to keep stray log statements out of production code. But what if you
deliberately want to log an event in order to provide additional information?

This article looks at the different situations that call for logging, shows the
difference between console.log and console.error in Node.js, and demonstrates
how to hand a logging function to libraries without cluttering the user’s
console.

Node.js fundamentals

The console.log and console.error methods can be used both in the browser and
in Node.js. There is one important thing to keep in mind with Node.js, however.
If you put a console.log call and a console.error call in a file named
index.js and then run it in a terminal with node index.js, the output of the
two calls appears one above the other.
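
The original snippet did not survive in this copy, but a minimal version
consistent with the messages referenced later in the article would be:

```javascript
// index.js: console.log writes to stdout, console.error writes to stderr.
console.log('Hello there');
console.error('Bye bye');
```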

Although the two lines look alike, the system treats them differently. If you
check the console section of the Node.js documentation, you’ll see that
console.log prints via stdout, while console.error prints via stderr.

Every process works with three default streams: stdin, stdout, and stderr. The
stdin stream handles input to the process, such as keystrokes or redirected
output (more on that shortly). The standard output stream, stdout, is meant for
the application’s output. Finally, the standard error stream, stderr, is meant
for error messages. If you want to understand what stderr is for and when to
use it, read this article.

In short, stderr lets you use the redirection (>) and pipe (|) operators to
handle errors and diagnostic information separately from the application’s
actual output. While the > operator redirects a command’s output to a file, the
2> operator redirects the stderr stream to a file. For example, this command
sends Hello there to the hello.log file and Bye bye to the error.log file.
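
The command itself was lost in this copy; a self-contained sketch of the
redirection (the file names follow the surrounding text) is:

```shell
# Recreate the two-line script so this snippet stands on its own,
# then split its two output streams into separate files.
printf "console.log('Hello there');\nconsole.error('Bye bye');\n" > index.js
node index.js > hello.log 2> error.log
cat hello.log   # Hello there
cat error.log   # Bye bye
```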

When should you log?

Now that we’ve covered the technical underpinnings of logging, let’s look at
the different scenarios that call for it. They usually fall into one of the
following categories:

  • quick debugging of unexpected behavior during development;
  • browser-based logging for analytics and diagnostics;
  • logging in server applications to record incoming requests and any errors
    that occur;
  • optional debug logging in libraries to help users troubleshoot problems;
  • logging the output of command-line interfaces to report progress,
    confirmations, and errors in the console.

This article covers only the last three Node.js-based scenarios.

Logging in server applications

There are several reasons to log the events happening on a server. For example,
logging incoming requests yields statistics such as how often users hit 404
errors, what might be causing them, or which User-Agent clients are in use. You
can also find out when an error occurred and what caused it.

To experiment with the material in this part of the article, create a new
directory for the project. In the project directory, create an index.js file
for the code, and run the following commands to initialize the project and
install express:

npm init --yes
npm install express

Next, set up a server with a middleware function that logs every request to the
console using the console.log method. Put the following lines into index.js:
Here, console.log('%O', req) is used to log the entire object. Internally,
console.log uses util.format, which supports other placeholders besides %O;
you can find them in the Node.js documentation.

When you run node index.js to start the server and open localhost:3000, the
console shows a lot of information you don’t really need:

If you use console.log('%s', req) instead, so that the object is not printed
in full, you won’t get much information either:
You could write your own logging function that prints only the data you care
about, but first you have to decide what information you actually need.
Although the focus is usually on the content of the message, in practice you
often need additional information as well, which includes:

  • a timestamp, so you know when events happened;
  • the computer/server name, if you’re running a distributed system;
  • the process ID, if you’re running several Node processes with something
    like pm2;
  • the message, the actual message content;
  • a stack trace, if an error is being logged;
  • additional variables/information.

In addition, given that everything ends up in the stdout and stderr streams
anyway, you’ll want to log at different levels, and to configure and filter
log entries by level.

You can achieve all of this by accessing various parts of process and writing
a few lines of JavaScript. However, one great thing about Node.js is that it
already has the npm ecosystem with several libraries you can use for this
purpose. They include:

  • pino;
  • winston;
  • roarr;
  • bunyan (this library has not been updated in two years).

pino is often the preferred choice because it is fast and has an ecosystem of
its own. Let’s see how pino can help with logging. Another advantage of this
library is the express-pino-logger package, which logs requests.

Install pino and express-pino-logger:

npm install pino express-pino-logger

Then update the index.js file to use the logger and the middleware:

In this snippet, we created a pino logger instance and passed it to
express-pino-logger to create a new logging middleware to call app.use with.
In addition, the console.log call at server startup was replaced with
logger.info, and a logger.debug call was added to the route to show different
log levels.

If you restart the server by running node index.js again, you’ll see different
output: every line is now printed as JSON. Open localhost:3000 again to see
another new JSON line.

Within the JSON you’ll find the information mentioned earlier, such as the
timestamp. You’ll also notice that the logger.debug message was not printed.
To make it visible, you have to change the default log level. When the logger
instance was created, its level was set from process.env.LOG_LEVEL, which
means you can change the value, or accept the default of info. Running
LOG_LEVEL=debug node index.js changes the log level.

Before doing that, though, let’s deal with the output format, which is not
very readable at the moment. That is intentional: according to pino’s
philosophy, for performance reasons the processing of log entries should be
moved into a separate process by piping the output (with the | operator). That
processing includes converting the output into a more human-friendly format or
shipping it to the cloud, and it is handled by tools called transports. Read
the transports documentation and you’ll also learn why pino doesn’t print
errors via stderr.

To see a more readable version of the log, we’ll use the pino-pretty tool. Run
in the terminal:

node index.js | pino-pretty

All log entries are piped with the | operator to pino-pretty, which cleans up
the output so it contains only the important information, displayed in
different colors. If you request localhost:3000 again, you should see the
debug message appear.

Many transports exist to make your log entries more readable or to transform them; you can even display them with emojis using pino-colada. These tools are useful for local development. When running your server in production, you might instead pipe the logs to a different transport, write them to disk with > for later processing, or do both at once with a command such as tee.

The documentation also covers log file rotation, filtering, and writing logs to other files.

Logging in Libraries

Now that we've learned how to effectively set up logging for a server application, let's apply the same technique to our own libraries.

The problem is that a library may want to log things for debugging purposes without overwhelming the consuming application. Instead, the consumer should be able to enable the logging when debugging is needed. By default, a library should not write anything to the output, leaving that choice to the user.

express is a good example of this. A lot is going on inside the express framework, and you might want to take a closer look at it when debugging your application. The express documentation says you can prepend DEBUG=express:* to the command, like this:
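Assuming index.js is your entry point, the command looks like:

```shell
DEBUG=express:* node index.js
```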

If you run this with an existing application, you'll see a lot of additional output that helps with debugging:

None of this information is visible unless debug logging is enabled. The debug package makes this possible. With it, you can write messages to a "namespace", and if the library's user includes that namespace, or a wildcard matching it, in their DEBUG environment variable, the messages are shown. First, install the debug library:
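In your project directory:

```shell
npm install debug
```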

Create a new file called random-id.js to simulate a library, and put the following code into it:
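A sketch of what such a "library" could look like; the mylib:randomid namespace matches the text, while the ID-generation logic itself is illustrative:

```javascript
const debug = require('debug')('mylib:randomid');

// Generates a pseudo-random ID and logs its lifecycle to the debug namespace
function generateRandomId() {
  debug('generating a new random ID');
  const id = Math.random().toString(36).slice(2, 10);
  debug('generated ID: %s', id);
  return id;
}

module.exports = { generateRandomId };
```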

This creates a new debug logger with the mylib:randomid namespace and then logs two messages with it. Let's use it in index.js from the previous section:
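For example, the route from earlier could call the "library" like this (a condensed sketch of the tutorial's index.js, leaving the pino middleware aside):

```javascript
const express = require('express');
const { generateRandomId } = require('./random-id');

const app = express();

app.get('/', (req, res) => {
  // The debug messages inside generateRandomId only appear when
  // DEBUG=mylib:randomid (or a matching wildcard) is set
  res.send(`Hello! Your ID is ${generateRandomId()}`);
});

app.listen(3000);
```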

If you start the server again, this time with DEBUG=mylib:randomid node index.js, the debug log entries for our "library" are displayed:

If the library's users want to integrate this debugging information into their pino logs, they can use a library called pino-debug, created by the pino team, to format these entries correctly.

Install the library:
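In your project directory:

```shell
npm install pino-debug
```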

pino-debug needs to be initialized once before debug is used for the first time. The simplest way to do that is to use the -r or --require flags to require the module before the script runs. Restart the server with this command (assuming pino-colada is installed):
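Assuming the same index.js entry point:

```shell
DEBUG=mylib:randomid node -r pino-debug index.js | npx pino-colada
```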

As a result, the library's debug entries are displayed the same way as the application's log entries:

Command-Line Interface (CLI) Output

The last use case covered in this article is logging for command-line interfaces. Preferably, logging related to the program logic should be kept separate from the output of the CLI itself. Any program-logic logging should go through a dedicated library such as debug. That way, the program logic can be reused without being tied to the CLI use case.

When building a CLI with Node.js, you might want to add colors, spinners, or other formatting to give the interface a visually appealing look. However, there are a few scenarios to keep in mind while doing so.

One scenario is that the CLI might be used in the context of a continuous integration (CI) system, where it's better to drop the colors and the visually rich output. Some CI systems set a CI flag. You can check whether you're running in CI with the is-ci package, which supports several such systems.

Some libraries, such as chalk, detect CI systems for you and disable colored console output. Let's see what this looks like.

Install chalk with npm install chalk and create a file called cli.js. Put the following lines into it:
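A minimal sketch, assuming chalk v4 (the last major version that supports require):

```javascript
const chalk = require('chalk');

// Chalk styles the strings with ANSI color codes when supported
console.log(chalk.blue('Hello'), chalk.red('world'));
console.log(chalk.green.bold('Colored output is disabled in CI environments'));
```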

Now, if you execute the script with node cli.js, the output is displayed in different colors:

But if you run it with CI=true node cli.js, the colors are suppressed:

The other scenario to keep in mind is whether stdout is running in terminal mode, i.e. the output is written to a terminal. In that case, we can display nicely formatted output with a tool like boxen. Otherwise, the output is most likely being redirected to a file or piped somewhere else.

You can check whether the stdin, stdout, or stderr streams are in terminal mode by looking at the isTTY attribute of the respective stream, e.g. process.stdout.isTTY. TTY stands for "teletypewriter" and, in this context, refers specifically to a terminal.

The values can differ for each of the three streams, depending on how the Node.js processes were started. You can find more details in the "process I/O" section of the Node.js documentation.

Let's see how the value of process.stdout.isTTY varies in different situations. Update cli.js to check it:
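A sketch of the check, again assuming chalk v4:

```javascript
const chalk = require('chalk');

// true when stdout is attached to a terminal, undefined when redirected
console.log(process.stdout.isTTY);
console.log(chalk.blue('Hello world'));
```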

Now run node cli.js in a terminal, and you'll see true followed by the colored message:

Then run the same command again, but redirect the output to a file and inspect the contents afterwards:
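For example, with a hypothetical output.log:

```shell
node cli.js > output.log
cat output.log
```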

This time, the terminal shows undefined followed by an uncolored message, because redirecting stdout took it out of terminal mode. Under the hood, chalk uses the supports-color tool, which internally checks isTTY of the respective stream.

Tools like chalk handle these checks on their own. Still, when developing a CLI, you should always be aware of the situations where the CLI runs in a CI system or the output gets redirected. These checks also let you take the CLI experience to the next level. For example, you can arrange the data in the terminal in a more structured way and, if isTTY is undefined, switch to output that is easier to parse.

Conclusion

Getting started with JavaScript and writing your first line to the console with console.log is quick. But before you deploy your code to production, there are several aspects of logging worth considering. This article is only an introduction to the various approaches and solutions used for logging; it doesn't contain everything you need to know. It's therefore a good idea to look at successful open source projects to see how they've solved logging and which tools they use. Now go and try logging without writing straight to the console!

If you know of other tools worth mentioning, write about them in the comments.

This post was updated on 18 April 2023 to include information about logging with Winston and AppSignal.

Good logging practices are crucial for monitoring and troubleshooting your
Node.js servers. They help you track errors in the application, discover
performance optimization opportunities, and carry out different kinds of
analysis on the system (such as in the case of outages or security issues) to make critical product decisions.

Even though logging is an essential aspect of building robust web applications,
it’s often ignored or glossed over in discussions about development best
practices. Knowing what and how to log is tricky because it’s often difficult
to understand what information you’ll need during troubleshooting.

Since we understand how essential logging is, we need to
practice good logging habits. We should accumulate logs that make it
easier to diagnose the root cause of problems and solve potential issues before
they impact end-users.

This article will outline some best practices to
follow when writing logs in a Node.js application.

1. Use a Node.js Logging Library

Node.js developers tend to rely on the runtime’s console methods (such as
console.log()) to log events since it is built into the runtime, and offers a
familiar API similar to the JavaScript console mechanism provided by
web browsers.

While console.log() has its uses, it is not an adequate
solution for implementing logging in a production application. It lacks
support for features and configuration options that are considered essential to
a good logging setup. For example, console methods do not support log levels
like warn, error or debug, despite the provision of methods like
console.warn(), console.error(), and console.debug(), respectively. Those
are simply functions that print to the standard output or standard error
without indicating log severity.

A good logging library provides a robust feature set that makes it easier to
centralize, format, and distribute the logs to fit your needs. For example, a
typical logging framework will provide various options for where to output log
data (such as the terminal, filesystem, or a database), while also supporting the
ability to send the logs over HTTP if you want to transport your log
entries to a log management service.

There are three major concerns for choosing a suitable logging library:
recording, formatting, and storing messages. You need to make sure
that your library of choice addresses all three concerns in a satisfactory
manner. Another critical consideration for selecting a logging library is
performance. Since the logger will be used a lot throughout the codebase,
it can harm your application’s runtime performance.
Therefore, you should also investigate the performance characteristics of a
library, and see how it compares to alternatives.

In the Node.js ecosystem, there are several popular options to consider. Most of
them offer similar features, but they also have their differences — so it’s
vital that you try them out yourself and see which one caters to your use case
the best:

  • Winston — The most popular logging library, with support for multiple transports. This allows you to easily configure your preferred storage location for your logs.
  • Pino — Pino’s biggest attraction is
    its speed. It claims to be up to five times
    faster than alternatives, in many cases.
  • Bunyan — Another feature-rich logging
    framework that outputs in JSON by default and provides a CLI tool for viewing
    your logs.
  • Roarr — Roarr is a different kind of
    logger that works in Node.js and the browser.

For the rest of the article, we’ll use winston to demonstrate the
different concepts. This is not necessarily an endorsement of winston; it’s only
being used here because it’s the most popular logging framework for Node.js. You
can follow along by installing winston in your project through npm:
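In your project directory:

```shell
npm install winston
```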

2. Use the Correct Log Levels

If you’ve spent a reasonable amount of time in the software industry, especially
supporting production applications, you may have observed the use of log levels,
at least in passing. They provide a way to differentiate between the types of
events in a system and add context to how important each event
is. If you correctly utilize log levels in your application, it will be easy to
distinguish between critical events that need to be immediately addressed versus purely informative events.

Although logging systems give different names to severity levels, the concepts remain largely the same. Here are the most common log
levels that you are likely to encounter, regardless of the logging framework you
choose (in decreasing order of severity):

  • FATAL: Used to represent a catastrophic situation — your application
    cannot recover. Logging at this level usually signifies the end of the
    program.
  • ERROR: Represents an error condition in the system that happens to
    halt a specific operation, but not the overall system. You can log at this level
    when a third-party API is returning errors.
  • WARN: Indicates runtime conditions that are undesirable or unusual,
    but not necessarily errors. An example could be using a backup data source when
    the primary source is unavailable.
  • INFO: Info messages are purely informative. Events that are user-driven or
    application-specific may be logged at this level. A common use of this level is
    to log interesting runtime events, such as the startup or shutdown of a
    service.
  • DEBUG: Used to represent diagnostic information that may be needed for
    troubleshooting.
  • TRACE: Captures every possible detail about an application’s
    behavior during development.

The winston library uses the following log levels by default — with error
being the most severe and silly being the least:
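For reference, winston's default "npm" levels map to the following numeric priorities, where a lower number means higher severity:

```javascript
// winston's default npm logging levels (lower number = higher severity)
const npmLevels = {
  error: 0,
  warn: 1,
  info: 2,
  http: 3,
  verbose: 4,
  debug: 5,
  silly: 6,
};

console.log(Object.keys(npmLevels).join(', '));
```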

If the defaults do not suit your needs, you can change them while initializing a
custom logger. For example, you can instead use the log levels discussed above.

When you want to log a message, you can reference the desired level directly on
the custom logger, as shown below:
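A sketch assuming you adopt the fatal-through-trace hierarchy discussed above; winston exposes each custom level as a method on the logger:

```javascript
const winston = require('winston');

// Custom levels mirroring the list discussed above
const customLevels = {
  fatal: 0,
  error: 1,
  warn: 2,
  info: 3,
  debug: 4,
  trace: 5,
};

const logger = winston.createLogger({
  levels: customLevels,
  level: 'info', // only fatal..info entries are emitted by default
  transports: [new winston.transports.Console()],
});

// Each custom level name is now a method on the logger
logger.info('Starting application on port 3000');
logger.fatal('Unrecoverable error: shutting down');
```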

Winston also allows you to define a level property on the logger and on
each transport which specifies the maximum level of messages that should
be logged. For example, you can run your programs at info level by default,
then switch to debug or trace level when you need to troubleshoot an issue
or deploy to a testing environment. You should control this setting
through an environmental variable.
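One common way to wire the level to the environment (the LOG_LEVEL variable name is a convention, not a winston requirement):

```javascript
// Falls back to 'info' when LOG_LEVEL is unset
const level = process.env.LOG_LEVEL || 'info';

console.log(`logging at level: ${level}`);
```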

3. Use Structured Logging

When defining how your log messages look, the priority should be
to make your log entries easy to read for both humans and machines.

One of the
primary goals of logging is to enable post-mortem debugging, and this will
involve reading log entries to reconstruct the steps that led to an
event in the system. Having human-readable log entries will make this thankless
task much easier for developers and sysadmins. It’s also important to use a
structured format that is easy to parse by machines. This allows for some automated processing on the logs (such as for alerting or auditing
purposes).

JSON is a universal favorite for structured log entries because it is ubiquitous
and easily readable by humans. It is also highly machine-readable and easily
converted to other formats, even when working with other programming languages.
When logging in JSON, it’s necessary to use a standard schema so that the
semantics of each field is clearly defined. This also makes it easy to find what
you’re looking for when analyzing log entries.

Winston outputs a JSON string by default with two fields: message and level.
The former contains the text that is being logged, while the latter signifies
the log level. Customizing the output is straightforward through
winston.format, which uses
logform to implement its
different formats. For example, if you wanted to add a timestamp to each log
entry, you can do so by combining the timestamp and json formats as shown
below:
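For example (the logged message here is illustrative):

```javascript
const winston = require('winston');

const logger = winston.createLogger({
  // Combine the timestamp and json formats so every entry
  // carries a timestamp field alongside level and message
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.json()
  ),
  transports: [new winston.transports.Console()],
});

logger.info('Connected to the database');
```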

This will produce log entries in the following format:
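With the combined timestamp and json formats, each entry looks roughly like this (field order and the timestamp value are illustrative):

```text
{"level":"info","message":"Connected to the database","timestamp":"2023-04-18T10:15:30.123Z"}
```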

Ensure you read the logform docs
to learn about the predefined formats and how to create your own custom
formats.

4. Write Descriptive Messages

Log entries should adequately describe the events that they represent. Each
message should be unique to the situation and should clearly explain the event that
occurred at that point. In the event of an emergency, your log entries may be the
only source of information to help you understand what happened, so it’s
important to get this aspect of logging right!

Here’s an example of an inadequate log entry being used to communicate a
request failure:
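Something like the following, where the message names neither the request nor the cause (an illustrative example):

```text
{"level":"error","message":"Request failed, will retry."}
```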

The message above doesn’t provide any insights into:

  • The specific request that failed
  • The reason for its failure
  • The length of time before the request is retried

We can probably find out the answers to some of these questions by
looking at other places such as other log entries or even the code itself. However,
it is better to make the log entry more valuable on its own, through a more
descriptive message:
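A sketch of a more descriptive entry; the endpoint, status code, and retry policy shown are illustrative:

```text
{"level":"error","message":"POST https://api.example.com/v1/payments failed with status 500 (Internal Server Error). Retrying in 60 seconds (attempt 2 of 5)."}
```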

This second message is much better because it provides adequate information
about the failed request, including the status code and response message, and
also notes that the request will be retried in 60 seconds. If all your
messages are just as descriptive, you’ll have a more pleasant time when
attempting to make sense of your logs. Other examples of good log messages
include the following:
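A few more illustrative messages in the same spirit:

```text
Database connection to postgres://db.internal:5432 established in 230 ms
Cache miss for key "user:42:profile"; fetching from primary store
Scheduled job "daily-report" completed: 1204 records processed in 12.3 s
```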

When composing log messages, include all the relevant details
pertaining to the event without being unnecessarily verbose. This will prevent other log
readers (which could include your future self) from being overwhelmed with too much
information to sift through. Your log
messages should also be able to stand on their own; don’t rely on a previous
message’s content to provide the context for a later entry.

5. Add the Right Amount of Context to Your Logs

Besides writing a descriptive log message, you also need to include the right
amount of context in the log entry. Context makes it possible to quickly reconstruct the actions leading up to an event. Add basic information to the log, such as the timestamp of the event and the method where it occurred (or a stack trace, in the case of errors). You should also add data points relevant to the flow of the operation that triggered the event. These data points may be
generated at different parts of the operation flow and aggregated at the point
of logging.

In the context of a billing service, log entries generated by the system could include several data points, including:

  • Session identifiers
  • Usernames and IDs
  • Product or transaction identifiers
  • The current page that the user is on

You can use each of the above data points to track a user’s flow through an
entire checkout process. If an important event occurs, the available data will
be automatically appended to the log output, and it will be possible to
identify:

  • the circumstances that led to the event (such as the user who experienced the event)
  • the page where it occurred
  • the transaction and
    product id that triggered the event.

These data points also
let you filter log entries based on a common identifier such as the user id or
product id.

Winston provides the ability to add global metadata (such as the component or
service where an event occurred) to every generated log entry. In a complex
application, this information in your logs is helpful for troubleshooting issues
because it immediately directs you to the point of failure.

You can configure
this when creating the logger for the component or service:
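winston's defaultMeta option attaches fields to every entry the logger produces; the service name here is illustrative:

```javascript
const winston = require('winston');

const logger = winston.createLogger({
  format: winston.format.json(),
  // Merged into every log entry created by this logger
  defaultMeta: { service: 'billing-service' },
  transports: [new winston.transports.Console()],
});

logger.info('Invoice generated');
```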

A service field will be included in all logs created by the logger object:
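Each entry then carries the field automatically, roughly like:

```text
{"level":"info","message":"Invoice generated","service":"billing-service"}
```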

To add metadata to individual entries, you need to create a context or
metadata object that you can pass around throughout the flow of an operation so
that the data is accessible at logging points. You can also utilize the concept
of child loggers
to add metadata at logging points:
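A child logger merges extra metadata into every entry it produces; the requestId field shown is illustrative:

```javascript
const winston = require('winston');

const logger = winston.createLogger({
  format: winston.format.json(),
  transports: [new winston.transports.Console()],
});

// Entries from this child automatically carry the request-scoped metadata
const requestLogger = logger.child({ requestId: 'f3a1c2d4' });
requestLogger.info('Processing checkout');
```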

6. Avoid Logging Sensitive Information

Regardless of whether you’re in an industry with strict rules around compliance (such as
healthcare or finance) or not, it’s important to avoid including sensitive
information in your logs.

Sensitive information includes social security numbers, addresses,
passwords, credit card details, access tokens, and similar data types. Since
log messages are often stored in plain text, such data will be exposed if the
logs fall into the wrong hands. You also have to ensure you are not falling
afoul of regulations that apply to countries where your product is
operational (such as GDPR) by
recording certain pieces of information.

You can avoid accidentally leaking sensitive data in your logs by minimizing
which parts of the system work with that data. For instance, credit card details
should only be seen by the billing component of your system, and sensitive data
should be kept out of URLs — redacted where possible.

Although this isn’t a foolproof solution, you can also use a blocklist to prevent specific fields from ever making it into the logs.

7. Log for Auditing and Profiling Reasons

We primarily use logs to diagnose issues and find the root
cause of bugs. However, logs can also prove invaluable when auditing or profiling a
system, or perhaps to generate interesting statistics about system behavior.

For example, you can log details of what users are doing on the system (like
user sign-ins, the resources they created or accessed, etc.). In the absence
of specialized tools, you can also use logging to profile your code by reporting
how long an operation took or how many times a function was executed. The
insights gleaned from this data can help you improve the performance of your
application’s hotspots.

Winston provides a simple profiling mechanism that you
can take advantage of:
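winston's profile() method starts a timer on the first call with a given label and logs the elapsed time on the second; the label and delay here are illustrative:

```javascript
const winston = require('winston');

const logger = winston.createLogger({
  format: winston.format.json(),
  transports: [new winston.transports.Console()],
});

logger.profile('fetch-users'); // starts the timer
setTimeout(() => {
  logger.profile('fetch-users'); // stops the timer and logs the duration
}, 500);
```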

A durationMs field will be included in the output:
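The resulting entry looks roughly like (the exact duration will vary):

```text
{"level":"info","message":"fetch-users","durationMs":502}
```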

8. Automatically Log Uncaught Exceptions and Unhandled Promise Rejections

When you encounter an uncaught
exception or unhandled promise rejection, it is always considered good practice to crash the program. Use a process
manager like PM2 to automatically restart the process and restore the program to a clean state.

To understand
why such an event has occurred, it’s also necessary to log the details of the
exception or promise rejection before exiting. Winston provides handlers for
both situations, which may be configured on a logger instance:
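A sketch of the configuration using winston's exceptionHandlers and rejectionHandlers options:

```javascript
const winston = require('winston');

const logger = winston.createLogger({
  transports: [new winston.transports.Console()],
  // Uncaught exceptions are written here before the process exits
  exceptionHandlers: [
    new winston.transports.File({ filename: 'exceptions.log' }),
  ],
  // Unhandled promise rejections are written here
  rejectionHandlers: [
    new winston.transports.File({ filename: 'rejections.log' }),
  ],
});
```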

In the example above, uncaught exceptions will be logged into an
exceptions.log file, while unhandled rejections will be logged into a
rejections.log file. Entries will automatically include the full stack trace
as well as information about process arguments and memory usage associated with
the exception, giving you all the details you need to find the root of the
problem.

Learn how to configure Winston’s unhandled
exception handler.

9. Centralize and Monitor Your Logs with Winston and AppSignal

Managing a Node.js application often involves handling vast amounts of logs
generated by the system. As your application grows, these logs can become
increasingly difficult to track and analyze, making it crucial to centralize
them in one unified location. By consolidating logs from various sources, you
can streamline the monitoring and troubleshooting process, providing better
visibility into your application’s performance and ensuring a more reliable user
experience.

Centralizing logs not only simplifies management, but also offers several key
benefits:

  • Easier log analysis: Combining logs from multiple instances and sources
    allows for more efficient and comprehensive analysis, making it easier to
    identify patterns, trends, and potential issues.

  • Faster troubleshooting: When issues arise, centralized logs make it easier
    to pinpoint the root cause, reducing the time spent on debugging and improving
    the speed at which you can resolve problems.

  • Improved performance monitoring: With all logs in one place, you can
    easily monitor the overall health of your application, enabling you to
    identify performance bottlenecks and optimize your code accordingly.

  • Alerting and notifications: You can set up alerts based on specific
    conditions, ensuring that you’re immediately notified of critical issues.

  • Regulatory compliance: For organizations subject to regulatory
    requirements, centralizing logs can help meet compliance standards by
    providing a clear audit trail and ensuring data integrity.

Winston can be configured to integrate with a
log management platform like AppSignal
to provide all the above benefits and more. All you need to do is set up the
AppSignal transport for Winston as shown below:
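A sketch of the integration; the exact transport name and options come from AppSignal's Node.js documentation, so treat this as an outline rather than a drop-in configuration:

```javascript
const winston = require('winston');
const { WinstonTransport } = require('@appsignal/nodejs');

const logger = winston.createLogger({
  transports: [
    // Streams log entries to AppSignal under the given log group
    new WinstonTransport({ group: 'app' }),
  ],
});

logger.info('Hello from winston');
```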

You’ll need to install the
AppSignal SDK for Node.js,
create a log source via the
AppSignal dashboard, and initialize the
AppSignal client instance with the provided API Key. Once you’re all set up,
your application logs will be streamed directly to AppSignal.

Here’s an example of how your logs might look in AppSignal:

Log details in AppSignal

Learn more about AppSignal’s logging integrations for Node.js.

Node.js Logging Tips: Next Steps

In this article, we’ve covered some invaluable practical tips to help you write
more useful log entries. Next, you should add logging to your Node.js server and configure it while
keeping the above best practices in mind.

If you have any additional tips
regarding logging in Node.js that were not discussed in this post, feel free to
share them with me on Twitter.

Thanks for reading, and happy coding!

P.S. If you liked this post, subscribe to our JavaScript Sorcery list for a monthly deep dive into more magical JavaScript tips and tricks.

P.P.S. If you need an APM for your Node.js app, go and check out the AppSignal APM for Node.js.

Editor’s note: This article was updated on 14 April 2022 to reflect the most recent information for Node.js logging best practices, including information about Pino and Bunyan and health monitoring tools.

Logging is one of the most important and useful steps a developer can take when working in Node.js. In fact, it’s one of the most useful steps a developer can take regardless of their runtime environment! Logging helps developers know what it is that their code is actually doing, and can help developers save hours of debugging work.

One of the key purposes of logging is to save information about each flow or runtime. This is the same information that a developer uses to understand their application. For example, if there is a bug in the database section of an app, a log will point to where the failure takes place, helping you to identify the cause of the problem.

In this article, we’ll cover some of the best practices for logging in Node.js and see why logging is a good idea. I will also share my recommendations for the best monitoring tools for Node.js applications.

Jump ahead:

  • Start with console.log
  • Move to a log library
    • Winston
    • Pino
    • Bunyan
  • Log HTTP requests in Node with Morgan
  • Configure Winston with Morgan
  • Define log levels
  • Use logs with a log management system
  • Health monitoring tools

What data should be logged?

Before we review best practices for logging in Node, it’s important to remember that not all information should be logged. For example, personal data, like user information, and delicate data, like passwords and credit card information, should not be logged. Also worth noting is that the tool you use to log is far less important than your approach to logging.

In the following sections, we will explain the most effective approaches as a part of logging best practices in Node.js.

Start with console.log

Some would argue that sprinkling console.log calls around your application is regular practice when building a side project. Other console methods include console.group, console.info, and less common methods like console.error:

console.error('Error!');

Running console.log does carry a small performance cost. To avoid negatively impacting performance, I recommend switching to a logging library when your project begins to expand.

Move to a log library

Logging libraries help developers create and manage log events, which can increase the overall efficiency and functionality of your application. Some of the most popular logging libraries for Node are Winston, Pino, Bunyan, and Log4js.

While a standard console.log is fine for simple cases, a logging library is more functional and helps avoid decreases in app performance.

Winston

If you want to store your error log in a remote location or separate database, Winston might be the best choice because it supports multiple transports. Alternatively, Log4js supports log streams, like logging to a console, and log aggregators like Loggly (using an appender).

Below is an example of how to set up a logging library using Winston:

const winston = require('winston');
const config = require('./config');

const enumerateErrorFormat = winston.format((info) => {
  if (info instanceof Error) {
    Object.assign(info, { message: info.stack });
  }
  return info;
});

const logger = winston.createLogger({
  level: config.env === 'development' ? 'debug' : 'info',
  format: winston.format.combine(
    enumerateErrorFormat(),
    config.env === 'development' ? winston.format.colorize() : winston.format.uncolorize(),
    winston.format.splat(),
    winston.format.printf(({ level, message }) => `${level}: ${message}`)
  ),
  transports: [
    new winston.transports.Console({
      stderrLevels: ['error'],
    }),
  ],
});

module.exports = logger;

Pino

This logging library is very popular for its low overhead and minimalism. It uses fewer resources for logging by offloading log processing to a worker thread.

const pino = require('pino');

// Create a logging instance
const logger = pino({
  level: process.env.NODE_ENV === 'production' ? 'info' : 'debug',
});


logger.info('Application started!');

Using Pino with web frameworks has recently been made easier. For example, Fastify comes with Pino by default, and others have a specific library for Pino. For more information on how to use Pino, refer to this article.

Bunyan

Bunyan is another fast JSON logging library that supports multiple transports and provides a CLI for filtering logs. It has a refined, deliberately minimal API. My favorite Bunyan feature is log snooping, which helps with debugging failures in production.

const bunyan = require('bunyan');
const log = bunyan.createLogger({name: 'myapp'});
log.info('My App');


{"name":"myapp","hostname":"banana.local","pid":40161,"level":30,"msg":"My App","time":"2022-04-04T18:24:23.851Z","v":0}

Other cool features of Bunyan are a stream system for controlling where logs are located, support for environments aside from Node.js, and that JSON objects are serialized by default.

Log HTTP requests in Node with Morgan

Another best practice is to log your HTTP request in your Node.js application. One of the most used tools to accomplish this is Morgan, which gets the server logs and systematizes them to make them more readable.

To use Morgan, simply set the format string:

const morgan = require('morgan');
app.use(morgan('dev'));

For reference, the predefined format string is:

morgan('tiny')

Below is the expected output:

Output of HTTP request in Node with Morgan.


Configure Winston with Morgan

If you choose to use the Winston library, then you can easily configure with Morgan:

const morgan = require('morgan');
const config = require('./config');
const logger = require('./logger');

morgan.token('message', (req, res) => res.locals.errorMessage || '');

const getIpFormat = () => (config.env === 'production' ? ':remote-addr - ' : '');
const successResponseFormat = `${getIpFormat()}:method :url :status - :response-time ms`;
const errorResponseFormat = `${getIpFormat()}:method :url :status - :response-time ms - message: :message`;

const successHandler = morgan(successResponseFormat, {
  skip: (req, res) => res.statusCode >= 400,
  stream: { write: (message) => logger.info(message.trim()) },
});

const errorHandler = morgan(errorResponseFormat, {
  skip: (req, res) => res.statusCode < 400,
  stream: { write: (message) => logger.error(message.trim()) },
});

module.exports = {
  successHandler,
  errorHandler,
};

As the example shows, configuring Winston with Morgan is just a matter of giving Morgan a stream whose write function forwards each message to the Winston logger.

Define log levels

Before embarking on a build with your development team, it is very important to define your log levels in order to differentiate between log events. Managing log events in an orderly and consistent manner makes it easier to get necessary information at a glance.

There are several log levels and it is important to know them and their uses. Each log level gives a rough direction about the importance and urgency of the message:

  • Error: important events that will cause the program execution to fail
  • Warn: crucial events that should be noticed to prevent failures
  • Info: important events that detail a completed task
  • Debug: detailed events mostly used by developers during debugging

The developer should be able to see a detailed event and determine if it should be fixed immediately.
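As a sketch of how these levels translate into code, the four levels above can be wired to the built-in console methods. The names here are hypothetical and this is not any particular library's API, just a minimal illustration of level filtering:

```javascript
// Minimal leveled-logger sketch: each level maps to an integer priority,
// and messages below the configured threshold are dropped.
const LEVELS = { error: 0, warn: 1, info: 2, debug: 3 };

function createLogger(threshold = 'info') {
  const max = LEVELS[threshold];
  const emit = (level, message) => {
    if (LEVELS[level] > max) return null; // below threshold: drop
    const line = `${level}: ${message}`;
    (level === 'error' ? console.error : console.log)(line);
    return line;
  };
  return {
    error: (m) => emit('error', m),
    warn: (m) => emit('warn', m),
    info: (m) => emit('info', m),
    debug: (m) => emit('debug', m),
  };
}

const logger = createLogger('info');
logger.info('task completed');  // printed
logger.debug('loop iteration'); // dropped at the "info" threshold
```

Real libraries like Winston work the same way underneath: a numeric priority per level and a configurable threshold.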

Use logs with a log management system

Depending on how big your application is, it may be helpful to pull the logs out of your application and manage them separately using a log management system.

Log management systems allow you to track and analyze logs as they happen in real time, which in turn can help improve your code. As you can see in the example below, a log management system can help you keep track of useful data including backend errors, anomalies, log sources, and production errors.

Loggly log management system display.

Loggly log management system display.

For log analysis and log management tools, I recommend Sentry, Loggly, McAfee Enterprise, Graylog, Splunk, Logmatic (since acquired by Datadog), or Logstash.

Health monitoring tools are a good way to keep track of your server performance and identify causes of application crashes or downtime. Most health monitoring tools offer error tracking as well as alerts and general performance monitoring. Some developers find error tracking particularly frustrating in Node.js, so using a health monitoring tool can help alleviate some of those difficulties.

Below are few popular monitoring tools for Node.js:

  • PM2
  • Sematext
  • App Metrics
  • ClinicJS
  • AppSignal
  • Express Status Monitor

Conclusion

In this article, we looked at how important logging is and how it can help developers better understand their applications. We also discussed logging best practices in Node.js, including using a log library, logging HTTP requests, defining log levels, and using a log management system.

Popular logging libraries like Winston, Pino, and Bunyan are pleasant to work with and lightweight, meaning they won’t become a bottleneck in your Node.js application.

While no infrastructure is completely safe or entirely error free, logging is a necessary step for developers who want to monitor production and cut down on errors.



Logging can also be useful for other team members including QA, Support, and new programmers, as it saves valuable information to learn from and build on.


Logging also helps developers reduce errors and detect cyber-attacks. Applications are dynamic by design: we can’t always predict how an application will react to data changes, errors, or program changes. Logging allows us to better understand our own programs.

For any application, a solid logging solution is essential. A good logging system increases the application’s stability and makes it easier to maintain on the production server. Application logging is the process of gathering information about your program at execution time.

Without logs, when a user has an issue you have no way of knowing what went wrong. You can only hope to contact the user who had the issue and have them tell you what happened.

As you can see, that is a bad strategy. The rest of this article focuses on some of the most important aspects of Node.js logging:

  1. What is a Log?
  2. Why is Logging Important?
  3. Log Levels
  4. Where Should I Log?
  5. Best Practices for Node.js Logging

What is a Log?

Logs are events that reflect many characteristics of your application; if created correctly by the team, they are the simplest means of troubleshooting and diagnosing your application.

Operations engineers and developers use logs for debugging. Product managers and UX designers use logs to plan and develop features. Marketers want to know how well the components tied to their advertising campaigns are performing.

Anyone with a stake in the company can benefit from the information provided by logs. However, you won’t appreciate the worth of your logs until they’ve been properly evaluated. And in order to do so, we’ll need to keep track of the correct data.

Why is Logging Important?

Previously, logging was seen as a low-priority task, and people did not put much effort into creating meaningful logs. Thanks to technological advances in machine learning and artificial intelligence, however, logs have become a valuable source of information.

Logs can be used to analyse user behaviour and the application’s general health: they reveal how the major components of your application are performing, and they let you measure crucial metrics such as the number of errors or warnings logged and your application’s overall availability.

Furthermore, logs assist you in comprehending what is occurring within your application. What happened is documented in your logs. This data is helpful if something goes wrong and you need to troubleshoot an issue. As a developer, the first thing you should do is look through the logs to determine what happened leading up to the issue. You’ll be able to see exactly what actions lead to the problem if you do it this way.

When debugging your application, logs are quite useful. Additionally, technical advances such as trend analysis enable your company to spot anomalies by examining logs, allowing it to go from reactive to proactive monitoring. You can limit the number of errors since you can correct them before they affect the user.

Log Levels

One of the best practices for efficient application logging is to use the right log level. A log level indicates the priority of a message, and each level is represented by an integer.

RFC 5424 specifies the severity of logging levels: each message carries a decimal severity indicator with a value between 0 and 7, inclusive. These levels and their numerical values are listed below.

Error Levels
  • Code 0 (Emergency): the system is unusable
  • Code 1 (Alert): action must be taken immediately
  • Code 2 (Critical): critical conditions
  • Code 3 (Error): error conditions
  • Code 4 (Warning): warning conditions
  • Code 5 (Notice): normal but significant condition
  • Code 6 (Informational): informational messages
  • Code 7 (Debug): debug-level messages
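In code, these severities are just a lookup table. Keep in mind that lower numbers are more severe. This is a sketch using RFC 5424's values, not tied to any particular library:

```javascript
// RFC 5424 severity levels: name → numeric code (0 is the MOST severe).
const SYSLOG_SEVERITY = {
  emerg: 0, alert: 1, crit: 2, err: 3,
  warning: 4, notice: 5, info: 6, debug: 7,
};

// "At least as severe as a warning" therefore means code <= 4.
const isAtLeastWarning = (level) => SYSLOG_SEVERITY[level] <= SYSLOG_SEVERITY.warning;

console.log(isAtLeastWarning('err'));  // true
console.log(isAtLeastWarning('info')); // false
```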

Each log level indicates the importance and urgency of the message in general. Let’s take a look at the most significant levels and how they’re used:

  • ERROR
    Something went wrong, and it needs to be looked at right away. For example, if the database is down, or if communication with other microservices has failed, or if the required input is unknown. System operators and monitoring systems are the key target audiences.
  • WARNING
    These logs are warnings that do not prevent the application from continuing; they serve as notifications when something goes wrong and a workaround is used. For example, incorrect user input, retries, and so on. These warnings should be fixed by the administrators in the future.
  • INFORMATIONAL
    Important messages, such as event messages that indicate a task has completed. For example: "New user created with id xyz". These messages let you keep track of the application’s progress.
  • DEBUG
    This level is for developers; it’s akin to logging the information you see while using a debugger or a breakpoint, such as which function was called and which parameters were supplied, and so on. It should have the current condition so that debugging and pinpointing the issue is easier.

A developer should be able to look at a detailed event and assess whether it needs to be fixed right away.

Where Should I Log?

An event source pattern is an excellent example of loose coupling, and it is frequently used in a microservices architecture. It consists of a number of services that place events on a bus or in a queue. Independent processes “listening” to the queue can react to those events.

Log events are simply one kind of event that can be queued. Multiple listeners can each subscribe to the queue and write to their own log. This flexibility comes at the cost of extra complexity, which is why most applications register just a single logger. Whatever you use for logging, though, it must be able to handle the volume of log messages.

Some of the most common log locations are listed here.

1. console or stdout/stderr

The basic console.log and console.info functions in Node.js will log to stdout. stderr is where console.warn and console.error are written. These will print the output to your console by default. This will be the developer tools console on the front end. This is generally sufficient for running your Node.js backend locally.

2. Log File

If you’ve already used the console everywhere, you can route stdout and stderr to a file without touching your code. This is accomplished by using the standard redirect when launching your application:

node project.js > project.log 2>&1

The > redirects stdout to a file named project.log, and 2>&1 redirects stderr to stdout, so both streams end up in project.log.

When would you use a file to log to?

In most cases, logging to persistent storage is preferable in production. The simplest approach is to store logs in a file. It isn’t, however, the best long-term option.

You might wish to log to your file system even when running locally. This can be useful when you have numerous processes running at the same time, possibly in separate containers.

When you have a workflow that spans numerous processes, you’ll want to bring all the logs together either way. It will make analysis go more smoothly: you don’t want to switch back and forth between different log outputs to connect the dots.

3. Log Management Tools

The ideal way to log structured events is to send them to a log management tool. Structured logging makes data easier to correlate, find patterns in, and use. Querying the data is more efficient when the logs are stored in a centralized logging database — specifically, one designed for the purpose.
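A structured log event is just a JSON record with consistent, queryable fields. The field names below are illustrative, not a required schema:

```javascript
// One structured event: machine-parseable, so a log management tool
// can filter on any field instead of grepping free text.
const event = {
  level: 'info',
  time: new Date().toISOString(),
  service: 'checkout',   // hypothetical service name
  msg: 'order created',
  orderId: 'ord_123',    // hypothetical order ID
};

const line = JSON.stringify(event);
console.log(line);
```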

Here are some log management tools that we suggest for logging:

  1. Atatus
  2. Loggly
  3. PaperTrail
  4. Sumo Logic
  5. Splunk
  6. GrayLog
  7. LogDNA

Best Practices for Node.js Logging

Node.js logging can be done in a variety of ways. We can use the built-in features or one of the many popular packages to improve logging. A sound baseline is to pipe stdout and stderr to your desired log destination, since many loggers (including the built-in console methods) write directly to process.stdout and process.stderr.

We’ll go over the most successful techniques as part of Node.js logging best practices in the sections below.

#1 Begin with console.log

console.log is the original Node.js logging technique. console.error, console.info, and console.warn are variants, all thin wrappers around the same basic call:

console.log(message)

You can call console.log anywhere in your code, but outside the code you have limited control over where those messages end up.

Heavy use of console.log can also slow your application down. As your project grows larger, I recommend moving to a logging library to avoid performance issues.

#2 Move to Log Library

Logging libraries assist developers in creating and managing log events, which can improve your application’s overall efficiency and effectiveness. Winston, Bunyan, and Log4js are three of the most popular logging libraries for Node.

Comparison of Logging Libraries

We’ll go through the most significant features of three logging libraries in this section: Winston, Bunyan, and Log4js.

  • Winston Logging
    Winston is a universal logging library that allows logs to be stored in several locations. It lets you store certain log levels in a remote location. The remaining logs can be saved somewhere else, such as a database. This handy feature lets you send higher-level logs to a log aggregation tool while storing lower-level logs in a database or file storage.
  • Bunyan Logging
    Bunyan is also a logging library with a lot of features. Bunyan, like Winston, allows you to customize your transportation options.
  • Log4js Logging
    Log4js is a fork of the well-known Java logger Log4j. The package isn’t as well-known as Winston or Bunyan, but it still has some unique features. Developers can use Log4js to log to the console, a file, or a log aggregator through a TCP connection.

Winston may be the best option if you want to store your error logs in a remote location or a separate database, because it supports multiple transports. Bunyan also supports several transports and ships with a CLI for filtering. Log4js, meanwhile, supports log streams and log aggregators in addition to logging to the console.

Here’s an example of how to set up a logger with Winston:

const winston = require('winston');
const config = require('./config'); // assumed to expose config.env ('development' | 'production')
const enumerateErrorFormat = winston.format((info) => {
    if (info instanceof Error) {
        Object.assign(info, {
            message: info.stack
        });
    }
    return info;
});
const logger = winston.createLogger({
    level: config.env === 'development' ? 'debug' : 'info',
    format: winston.format.combine(
        enumerateErrorFormat(),
        config.env === 'development' ? winston.format.colorize() : winston.format.uncolorize(),
        winston.format.splat(),
        winston.format.printf(({
            level,
            message
        }) => `${level}: ${message}`)
    ),
    transports: [
        new winston.transports.Console({
            stderrLevels: ['error'],
        }),
    ],
});
module.exports = logger;

The point is that while console.log has its place, a logging library gives you far more control and helps avoid degrading your application’s performance.

#3 Log HTTP Requests with Morgan

Logging your HTTP requests in your Node.js application is another great practice. Morgan is one of the most popular tools for doing so: it takes the server logs and organizes them to make them more readable.

To use Morgan, pass it a format string:

const morgan = require('morgan');
app.use(morgan('dev'));

Configure Winston with Morgan

If you prefer to use the Winston library, Morgan makes it simple to set up:

const morgan = require('morgan');
const config = require('./config'); // assumed to expose config.env
const logger = require('./logger'); // the Winston logger defined above
morgan.token('message', (req, res) => res.locals.errorMessage || '');
const getIpFormat = () => (config.env === 'production' ? ':remote-addr - ' : '');
const successResponseFormat = `${getIpFormat()}:method :url :status - :response-time ms`;
const errorResponseFormat = `${getIpFormat()}:method :url :status - :response-time ms - message: :message`;
const successHandler = morgan(successResponseFormat, {
    skip: (req, res) => res.statusCode >= 400,
    stream: {
        write: (message) => logger.info(message.trim())
    },
});
const errorHandler = morgan(errorResponseFormat, {
    skip: (req, res) => res.statusCode < 400,
    stream: {
        write: (message) => logger.error(message.trim())
    },
});
module.exports = {
    successHandler,
    errorHandler,
};

To configure Winston with Morgan, simply give Morgan a stream whose write function forwards each message to the Winston logger, as shown in the example above.

#4 Define Log Level

It’s critical to specify your log levels before starting a build with your development team so that you can distinguish between log events. It’s easier to gather important information at a glance when log events are managed in an ordered and consistent manner.

It’s essential that everyone on the team agrees on when to use which log level. For example, one developer might log a failed login attempt as a WARNING, while another treats it as an ERROR because it could indicate that someone is attempting to break into an account.

You can lose out on vital information in your production environment if your team uses multiple log levels for similar events.

Additionally, agree on a specific log format. For example, you might want to include the user ID in every log message. Define these standards before you begin writing code; logs are much easier to analyze when they follow a standard format.
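One way to enforce such a standard, e.g. a user ID in every message, is to bind shared context once and reuse it. This is a hand-rolled sketch of the "child logger" idea that libraries like Winston and Bunyan provide, not their actual API:

```javascript
// Bind context (e.g. the current user's ID) once; every message
// emitted through the returned logger carries the same fields.
function withContext(context) {
  const emit = (level, msg) => {
    const line = JSON.stringify({ level, ...context, msg });
    console.log(line);
    return line;
  };
  return {
    info: (msg) => emit('info', msg),
    error: (msg) => emit('error', msg),
  };
}

const reqLogger = withContext({ userId: 'u_42' }); // hypothetical user ID
reqLogger.info('profile updated');
```

Creating one such logger per request guarantees the agreed-upon fields can never be forgotten at individual call sites.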

#5 Server Logs

Server logs are a crucial component of application monitoring. Whatever hosting environment you use, you must keep a watch on this. Sending all of the data to a single location is the optimal solution. Application logs, database logs, and server logs should all be viewed together because they all have an influence on your users.

Let’s not forget about the network. Although most of us have little influence over the network, it’s worth examining how you might log network concerns, since errors can sometimes be traced back to it. Without insight into network issues, we can end up chasing our tails looking for a cause after an incident.

Conclusion

While no infrastructure is perfectly secure or error-free, logging is an essential step for developers who wish to keep an eye on production and reduce errors. Other team members, such as QA, Support, and new programmers, can benefit from logging because it saves critical information to learn from and grow on.

Logging is an important part of any Node.js application, and it affects almost everyone involved with the application. From users to executives to designers and engineers, logs are essential to the feedback loop that drives success. We should keep track of everything, from minor details to major events.


