diff --git a/x-pack/docs/en/security/authentication/active-directory-realm.asciidoc b/x-pack/docs/en/security/authentication/active-directory-realm.asciidoc
new file mode 100644
index 0000000000000..0f75cb3f7a445
--- /dev/null
+++ b/x-pack/docs/en/security/authentication/active-directory-realm.asciidoc
@@ -0,0 +1,80 @@
+[role="xpack"]
+[[active-directory-realm]]
+=== Active Directory user authentication
+
+You can configure the {stack} {security-features} to communicate with Active
+Directory to authenticate users. To integrate with Active Directory, you
+configure an `active_directory` realm and map Active Directory users and groups
+to roles in the <>.
+
+See {ref}/configuring-ad-realm.html[Configuring an Active Directory realm].
+
+The {security-features} use LDAP to communicate with Active Directory, so
+`active_directory` realms are similar to <>. Like
+LDAP directories, Active Directory stores users and groups hierarchically. The
+directory's hierarchy is built from containers such as the _organizational unit_
+(`ou`), _organization_ (`o`), and _domain controller_ (`dc`).
+
+The path to an entry is a _Distinguished Name_ (DN) that uniquely identifies a
+user or group. User and group names typically have attributes such as a
+_common name_ (`cn`) or _unique ID_ (`uid`). A DN is specified as a string, for
+example `"cn=admin,dc=example,dc=com"` (white spaces are ignored).
+
+The {security-features} support only Active Directory security groups. You
+cannot map distribution groups to roles.
+
+NOTE: When you use Active Directory for authentication, the username entered by
+      the user is expected to match the `sAMAccountName` or `userPrincipalName`,
+      not the common name.
+
+The Active Directory realm authenticates users using an LDAP bind request. After
+authenticating the user, the realm then searches to find the user's entry in
+Active Directory. Once the user has been found, the Active Directory realm
+retrieves the user's group memberships from the `tokenGroups` attribute on the
+user's entry in Active Directory.
+
+[[ad-load-balancing]]
+==== Load balancing and failover
+The `load_balance.type` setting can be used at the realm level to configure how
+the {security-features} should interact with multiple Active Directory servers.
+Two modes of operation are supported: failover and load balancing.
+
+See
+{ref}/security-settings.html#load-balancing[Load balancing and failover settings].
+
+[[ad-settings]]
+==== Active Directory realm settings
+
+See
+{ref}/security-settings.html#ref-ad-settings[Active Directory realm settings].
+
+[[mapping-roles-ad]]
+==== Mapping Active Directory users and groups to roles
+
+See {ref}/configuring-ad-realm.html[Configuring an Active Directory realm].
+
+[[ad-user-metadata]]
+==== User metadata in Active Directory realms
+When a user is authenticated via an Active Directory realm, the following
+properties are populated in the user's _metadata_:
+
+|=======================
+| Field         | Description
+| `ldap_dn`     | The distinguished name of the user.
+| `ldap_groups` | The distinguished name of each of the groups that were
+                  resolved for the user (regardless of whether those
+                  groups were mapped to a role).
+|=======================
+
+This metadata is returned in the
+{ref}/security-api-authenticate.html[authenticate API] and can be used with
+<> in roles.
+
+Additional metadata can be extracted from the Active Directory server by configuring
+the `metadata` setting on the Active Directory realm.
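+
+For example, the following sketch of a realm configuration would also load the
+`mail` and `telephoneNumber` attributes into the user's metadata. The realm
+name (`ad1`), the domain name, and the attribute list are placeholder values;
+substitute values that match your directory:
+
+[source,yaml]
+--------------------------------------------------
+xpack.security.authc:
+  realms:
+    ad1:
+      type: active_directory
+      order: 0
+      domain_name: example.com
+      # Additional LDAP attributes to load into the user's metadata,
+      # alongside the always-present ldap_dn and ldap_groups.
+      metadata: ["mail", "telephoneNumber"]
+--------------------------------------------------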
+ +[[active-directory-ssl]] +==== Setting up SSL between Elasticsearch and Active Directory + +See +{ref}/configuring-tls.html#tls-active-directory[Encrypting communications between {es} and Active Directory]. diff --git a/x-pack/docs/en/security/authentication/built-in-users.asciidoc b/x-pack/docs/en/security/authentication/built-in-users.asciidoc new file mode 100644 index 0000000000000..5c6e9b6211b40 --- /dev/null +++ b/x-pack/docs/en/security/authentication/built-in-users.asciidoc @@ -0,0 +1,203 @@ +[role="xpack"] +[[built-in-users]] +=== Built-in users + +The {stack-security-features} provide built-in user credentials to help you get +up and running. These users have a fixed set of privileges and cannot be +authenticated until their passwords have been set. The `elastic` user can be +used to <>. + +`elastic`:: A built-in _superuser_. See <>. +`kibana`:: The user Kibana uses to connect and communicate with Elasticsearch. +`logstash_system`:: The user Logstash uses when storing monitoring information in Elasticsearch. +`beats_system`:: The user the Beats use when storing monitoring information in Elasticsearch. +`apm_system`:: The user the APM server uses when storing monitoring information in {es}. +`remote_monitoring_user`:: The user {metricbeat} uses when collecting and +storing monitoring information in {es}. It has the `remote_monitoring_agent` and +`remote_monitoring_collector` built-in roles. + + +[float] +[[built-in-user-explanation]] +==== How the built-in users work +These built-in users are stored in a special `.security` index, which is managed +by {es}. If a built-in user is disabled or its password +changes, the change is automatically reflected on each node in the cluster. If +your `.security` index is deleted or restored from a snapshot, however, any +changes you have applied are lost. + +Although they share the same API, the built-in users are separate and distinct +from users managed by the <>. Disabling the native +realm will not have any effect on the built-in users. The built-in users can +be disabled individually, using the +{ref}/security-api-disable-user.html[disable users API]. + +[float] +[[bootstrap-elastic-passwords]] +==== The Elastic bootstrap password + +When you install {es}, if the `elastic` user does not already have a password, +it uses a default bootstrap password. The bootstrap password is a transient +password that enables you to run the tools that set all the built-in user passwords. + +By default, the bootstrap password is derived from a randomized `keystore.seed` +setting, which is added to the keystore during installation. You do not need +to know or change this bootstrap password. If you have defined a +`bootstrap.password` setting in the keystore, however, that value is used instead. +For more information about interacting with the keystore, see +{ref}/secure-settings.html[Secure Settings]. + +NOTE: After you <>, +in particular for the `elastic` user, there is no further use for the bootstrap +password. + +[float] +[[set-built-in-user-passwords]] +==== Setting built-in user passwords + +You must set the passwords for all built-in users. + +The +elasticsearch-setup-passwords+ tool is the simplest method to set the +built-in users' passwords for the first time. It uses the `elastic` user's +bootstrap password to run user management API requests. 
For example, you can run +the command in an "interactive" mode, which prompts you to enter new passwords +for the `elastic`, `kibana`, `logstash_system`, `beats_system`, `apm_system`, +and `remote_monitoring_user` users: + +[source,shell] +-------------------------------------------------- +bin/elasticsearch-setup-passwords interactive +-------------------------------------------------- + +For more information about the command options, see +{ref}/setup-passwords.html[elasticsearch-setup-passwords]. + +IMPORTANT: After you set a password for the `elastic` user, the bootstrap +password is no longer valid; you cannot run the `elasticsearch-setup-passwords` +command a second time. + +Alternatively, you can set the initial passwords for the built-in users by using +the *Management > Users* page in {kib} or the +{ref}/security-api-change-password.html[Change Password API]. These methods are +more complex. You must supply the `elastic` user and its bootstrap password to +log into {kib} or run the API. This requirement means that you cannot use the +default bootstrap password that is derived from the `keystore.seed` setting. +Instead, you must explicitly set a `bootstrap.password` setting in the keystore +before you start {es}. For example, the following command prompts you to enter a +new bootstrap password: + +[source,shell] +---------------------------------------------------- +bin/elasticsearch-keystore add "bootstrap.password" +---------------------------------------------------- + +You can then start {es} and {kib} and use the `elastic` user and bootstrap +password to log into {kib} and change the passwords. Alternatively, you can +submit Change Password API requests for each built-in user. These methods are +better suited for changing your passwords after the initial setup is complete, +since at that point the bootstrap password is no longer required. + +[[add-built-in-user-passwords]] + +[float] +[[add-built-in-user-kibana]] +==== Adding built-in user passwords to {kib} + +After the `kibana` user password is set, you need to update the {kib} server +with the new password by setting `elasticsearch.password` in the `kibana.yml` +configuration file: + +[source,yaml] +----------------------------------------------- +elasticsearch.password: kibanapassword +----------------------------------------------- + +See {kibana-ref}/using-kibana-with-security.html[Configuring security in {kib}]. + +[float] +[[add-built-in-user-logstash]] +==== Adding built-in user passwords to {ls} + +The `logstash_system` user is used internally within Logstash when +monitoring is enabled for Logstash. + +To enable this feature in Logstash, you need to update the Logstash +configuration with the new password by setting `xpack.monitoring.elasticsearch.password` in +the `logstash.yml` configuration file: + +[source,yaml] +---------------------------------------------------------- +xpack.monitoring.elasticsearch.password: logstashpassword +---------------------------------------------------------- + +If you have upgraded from an older version of Elasticsearch, +the `logstash_system` user may have defaulted to _disabled_ for security reasons. 
+Once the password has been changed, you can enable the user via the following API call: + +[source,js] +--------------------------------------------------------------------- +PUT _xpack/security/user/logstash_system/_enable +--------------------------------------------------------------------- +// CONSOLE + +See {logstash-ref}/ls-security.html#ls-monitoring-user[Configuring credentials for {ls} monitoring]. + +[float] +[[add-built-in-user-beats]] +==== Adding built-in user passwords to Beats + +The `beats_system` user is used internally within Beats when monitoring is +enabled for Beats. + +To enable this feature in Beats, you need to update the configuration for each +of your beats to reference the correct username and password. For example: + +[source,yaml] +---------------------------------------------------------- +xpack.monitoring.elasticsearch.username: beats_system +xpack.monitoring.elasticsearch.password: beatspassword +---------------------------------------------------------- + +For example, see {metricbeat-ref}/monitoring.html[Monitoring {metricbeat}]. + +The `remote_monitoring_user` is used when {metricbeat} collects and stores +monitoring data for the {stack}. See <>. + +If you have upgraded from an older version of {es}, then you may not have set a +password for the `beats_system` or `remote_monitoring_user` users. If this is +the case, then you should use the *Management > Users* page in {kib} or the +{ref}/security-api-change-password.html[Change Password API] to set a password +for these users. + +[float] +[[add-built-in-user-apm]] +==== Adding built-in user passwords to APM + +The `apm_system` user is used internally within APM when monitoring is enabled. + +To enable this feature in APM, you need to update the APM configuration file to +reference the correct username and password. For example: + +[source,yaml] +---------------------------------------------------------- +xpack.monitoring.elasticsearch.username: apm_system +xpack.monitoring.elasticsearch.password: apmserverpassword +---------------------------------------------------------- + +//See {apm-server-ref}/monitoring.html[Monitoring APM Server]. + +If you have upgraded from an older version of {es}, then you may not have set a +password for the `apm_system` user. If this is the case, +then you should use the *Management > Users* page in {kib} or the +{ref}/security-api-change-password.html[Change Password API] to set a password +for these users. + +[float] +[[disabling-default-password]] +==== Disabling default password functionality +[IMPORTANT] +============================================================================= +This setting is deprecated. The elastic user no longer has a default password. +The password must be set before the user can be used. +See <>. +============================================================================= diff --git a/x-pack/docs/en/security/authentication/file-realm.asciidoc b/x-pack/docs/en/security/authentication/file-realm.asciidoc new file mode 100644 index 0000000000000..9261c33eb1f43 --- /dev/null +++ b/x-pack/docs/en/security/authentication/file-realm.asciidoc @@ -0,0 +1,27 @@ +[role="xpack"] +[[file-realm]] +=== File-based user authentication + +You can manage and authenticate users with the built-in `file` realm. +With the `file` realm, users are defined in local files on each node in the cluster. + +IMPORTANT: As the administrator of the cluster, it is your responsibility to +ensure the same users are defined on every node in the cluster. 
The {stack}
+{security-features} do not provide any mechanism to guarantee this.
+
+The `file` realm is primarily supported to serve as a fallback/recovery realm. It
+is most useful in situations where all users have locked themselves out of the
+system (for example, no one remembers their username or password). In such a
+scenario, the `file` realm is your only way back in: you can define a new `admin`
+user in the `file` realm and use it to log in and reset the credentials of all
+other users.
+
+IMPORTANT: When you configure realms in `elasticsearch.yml`, only the realms you
+specify are used for authentication. To use the `file` realm as a fallback, you
+must include it in the realm chain.
+
+To define users, the {security-features} provide the
+{ref}/users-command.html[users] command-line tool. This tool enables you to add
+and remove users, assign user roles, and manage user passwords.
+
+For more information, see
+{ref}/configuring-file-realm.html[Configuring a file realm].
diff --git a/x-pack/docs/en/security/authentication/index.asciidoc b/x-pack/docs/en/security/authentication/index.asciidoc
new file mode 100644
index 0000000000000..7a0d469fe6670
--- /dev/null
+++ b/x-pack/docs/en/security/authentication/index.asciidoc
@@ -0,0 +1,22 @@
+
+include::overview.asciidoc[]
+include::built-in-users.asciidoc[]
+include::internal-users.asciidoc[]
+include::token-authentication-services.asciidoc[]
+include::realms.asciidoc[]
+include::realm-chains.asciidoc[]
+include::active-directory-realm.asciidoc[]
+include::file-realm.asciidoc[]
+include::ldap-realm.asciidoc[]
+include::native-realm.asciidoc[]
+include::pki-realm.asciidoc[]
+include::saml-realm.asciidoc[]
+include::kerberos-realm.asciidoc[]
+
+include::{xes-repo-dir}/security/authentication/custom-realm.asciidoc[]
+
+include::{xes-repo-dir}/security/authentication/anonymous-access.asciidoc[]
+
+include::{xes-repo-dir}/security/authentication/user-cache.asciidoc[]
+
+include::{xes-repo-dir}/security/authentication/saml-guide.asciidoc[]
diff --git a/x-pack/docs/en/security/authentication/internal-users.asciidoc b/x-pack/docs/en/security/authentication/internal-users.asciidoc
new file mode 100644
index 0000000000000..7ae7dc5bc1723
--- /dev/null
+++ b/x-pack/docs/en/security/authentication/internal-users.asciidoc
@@ -0,0 +1,14 @@
+[role="xpack"]
+[[internal-users]]
+=== Internal users
+
+The {stack-security-features} use three _internal_ users (`_system`, `_xpack`,
+and `_xpack_security`), which are responsible for the operations that take place
+inside an {es} cluster.
+
+These users are only used by requests that originate from within the cluster.
+For this reason, they cannot be used to authenticate against the API and there
+is no password to manage or reset.
+
+From time to time, you may find a reference to one of these users inside your
+logs, including <>.
diff --git a/x-pack/docs/en/security/authentication/kerberos-realm.asciidoc b/x-pack/docs/en/security/authentication/kerberos-realm.asciidoc
new file mode 100644
index 0000000000000..a8d6a74cb6092
--- /dev/null
+++ b/x-pack/docs/en/security/authentication/kerberos-realm.asciidoc
@@ -0,0 +1,63 @@
+[role="xpack"]
+[[kerberos-realm]]
+=== Kerberos authentication
+
+You can configure the {stack} {security-features} to support Kerberos V5
+authentication, an industry standard protocol to authenticate users in {es}.
+
+NOTE: You cannot use the Kerberos realm to authenticate users in {kib}
+or on the transport network layer.
+
+To authenticate users with Kerberos, you need to
+{ref}/configuring-kerberos-realm.html[configure a Kerberos realm] and
+<>.
+For more information on realm settings, see
+{ref}/security-settings.html#ref-kerberos-settings[Kerberos realm settings].
+
+[[kerberos-terms]]
+==== Key concepts
+
+There are a few terms and concepts that you'll encounter when you're setting up
+Kerberos realms:
+
+_kdc_::
+Key Distribution Center. A service that issues Kerberos tickets.
+
+_principal_::
+A Kerberos principal is a unique identity to which Kerberos can assign
+tickets. It can be used to identify a user or a service provided by a
+server.
++
+--
+Kerberos V5 principal names are of the format `primary/instance@REALM`, where
+`primary` is a user name.
+
+`instance` is an optional string that qualifies the primary and is separated
+from the primary by a slash (`/`). For a user, it is usually not used; for
+service hosts, it is the fully qualified domain name of the host.
+
+`REALM` is the Kerberos realm. Usually it is the domain name in upper case.
+An example of a typical user principal is `user@ES.DOMAIN.LOCAL`. An example of
+a typical service principal is `HTTP/es.domain.local@ES.DOMAIN.LOCAL`.
+--
+
+_realm_::
+Realms define the administrative boundary within which the authentication server
+has authority to authenticate users and services.
+
+_keytab_::
+A file that stores pairs of principals and encryption keys.
+
+IMPORTANT: Anyone with read permissions to this file can use the
+credentials in the network to access other services, so it is important
+to protect it with proper file permissions.
+
+_krb5.conf_::
+A file that contains Kerberos configuration information such as the default realm
+name, the location of Key Distribution Centers (KDCs), realms information,
+mappings from domain names to Kerberos realms, and default configurations for
+realm session key encryption types.
+
+_ticket granting ticket (TGT)_::
+A TGT is an authentication ticket generated by the Kerberos authentication
+server. It contains an encrypted authenticator.
\ No newline at end of file
diff --git a/x-pack/docs/en/security/authentication/ldap-realm.asciidoc b/x-pack/docs/en/security/authentication/ldap-realm.asciidoc
new file mode 100644
index 0000000000000..4d90ba0a1c659
--- /dev/null
+++ b/x-pack/docs/en/security/authentication/ldap-realm.asciidoc
@@ -0,0 +1,88 @@
+[role="xpack"]
+[[ldap-realm]]
+=== LDAP user authentication
+
+You can configure the {stack} {security-features} to communicate with a
+Lightweight Directory Access Protocol (LDAP) server to authenticate users. To
+integrate with LDAP, you configure an `ldap` realm and map LDAP groups to user
+roles in the <>.
+
+LDAP stores users and groups hierarchically, similar to the way folders are
+grouped in a file system. An LDAP directory's hierarchy is built from containers
+such as the _organizational unit_ (`ou`), _organization_ (`o`), and
+_domain controller_ (`dc`).
+
+The path to an entry is a _Distinguished Name_ (DN) that uniquely identifies a
+user or group. User and group names typically have attributes such as a
+_common name_ (`cn`) or _unique ID_ (`uid`). A DN is specified as a string,
+for example `"cn=admin,dc=example,dc=com"` (white spaces are ignored).
+
+The `ldap` realm supports two modes of operation: a user search mode
+and a mode with specific templates for user DNs.
+
+[[ldap-user-search]]
+==== User search mode and user DN templates mode
+
+See {ref}/configuring-ldap-realm.html[Configuring an LDAP realm].
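+
+The difference between the two modes can be sketched roughly as follows. The
+example below is illustrative only: the realm names, URLs, DNs, and the search
+filter are placeholder values, and the bind password for the search realm is
+set separately (for example, in the {es} keystore):
+
+[source,yaml]
+--------------------------------------------------
+xpack.security.authc:
+  realms:
+    ldap_templates:
+      type: ldap
+      order: 1
+      url: "ldaps://ldap.example.com:636"
+      # User DN templates mode: the username is substituted for {0}
+      # and the resulting DN is used to bind directly.
+      user_dn_templates:
+        - "cn={0},ou=users,dc=example,dc=com"
+    ldap_search:
+      type: ldap
+      order: 2
+      url: "ldaps://ldap.example.com:636"
+      # User search mode: bind as a service account, then search for
+      # the user under the base DN using the filter.
+      bind_dn: "cn=ldap-service,ou=users,dc=example,dc=com"
+      user_search:
+        base_dn: "dc=example,dc=com"
+        filter: "(cn={0})"
+--------------------------------------------------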
+
+[[ldap-load-balancing]]
+==== Load balancing and failover
+The `load_balance.type` setting can be used at the realm level to configure how
+the {security-features} should interact with multiple LDAP servers. The
+{security-features} support both failover and load balancing modes of operation.
+
+See
+{ref}/security-settings.html#load-balancing[Load balancing and failover settings].
+
+[[ldap-settings]]
+==== LDAP realm settings
+
+See {ref}/security-settings.html#ref-ldap-settings[LDAP realm settings].
+
+[[mapping-roles-ldap]]
+==== Mapping LDAP groups to roles
+
+An integral part of a realm authentication process is to resolve the roles
+associated with the authenticated user. Roles define the privileges a user has
+in the cluster.
+
+Because users in the `ldap` realm are managed externally in the LDAP server,
+the expectation is that their roles are managed there as well. In fact, LDAP
+supports the notion of groups, which often represent user roles for different
+systems in the organization.
+
+The `ldap` realm enables you to map LDAP users to roles via their LDAP
+groups, or other metadata. This role mapping can be configured via the
+{ref}/security-api-put-role-mapping.html[add role mapping API] or by using a
+file stored on each node. When a user authenticates with LDAP, the privileges
+for that user are the union of all privileges defined by the roles to which
+the user is mapped. For more information, see
+{ref}/configuring-ldap-realm.html[Configuring an LDAP realm].
+
+[[ldap-user-metadata]]
+==== User metadata in LDAP realms
+When a user is authenticated via an LDAP realm, the following properties are
+populated in the user's _metadata_:
+
+|=======================
+| Field         | Description
+| `ldap_dn`     | The distinguished name of the user.
+| `ldap_groups` | The distinguished name of each of the groups that were
+                  resolved for the user (regardless of whether those
+                  groups were mapped to a role).
+|=======================
+
+This metadata is returned in the
+{ref}/security-api-authenticate.html[authenticate API], and can be used with
+<> in roles.
+
+Additional fields can be included in the user's metadata by configuring
+the `metadata` setting on the LDAP realm. This metadata is available for use
+with the <> or in
+<>.
+
+[[ldap-ssl]]
+==== Setting up SSL between Elasticsearch and LDAP
+
+See
+{ref}/configuring-tls.html#tls-ldap[Encrypting communications between {es} and LDAP].
diff --git a/x-pack/docs/en/security/authentication/native-realm.asciidoc b/x-pack/docs/en/security/authentication/native-realm.asciidoc
new file mode 100644
index 0000000000000..7905064b5cc66
--- /dev/null
+++ b/x-pack/docs/en/security/authentication/native-realm.asciidoc
@@ -0,0 +1,32 @@
+[role="xpack"]
+[[native-realm]]
+=== Native user authentication
+
+The easiest way to manage and authenticate users is with the internal `native`
+realm. You can use the REST APIs or Kibana to add and remove users, assign user
+roles, and manage user passwords.
+
+[[native-realm-configuration]]
+[float]
+==== Configuring a native realm
+
+See {ref}/configuring-native-realm.html[Configuring a native realm].
+
+[[native-settings]]
+==== Native realm settings
+
+See {ref}/security-settings.html#ref-native-settings[Native realm settings].
+
+[[managing-native-users]]
+==== Managing native users
+
+The {stack} {security-features} enable you to easily manage users in {kib} on the
+*Management / Security / Users* page.
+
+Alternatively, you can manage users through the `user` API.
For more +information and examples, see +{ref}/security-api.html#security-user-apis[user management APIs]. + +[[migrating-from-file]] +NOTE: To migrate file-based users to the `native` realm, use the +{ref}/migrate-tool.html[migrate tool]. diff --git a/x-pack/docs/en/security/authentication/overview.asciidoc b/x-pack/docs/en/security/authentication/overview.asciidoc new file mode 100644 index 0000000000000..46f8b65b22ff1 --- /dev/null +++ b/x-pack/docs/en/security/authentication/overview.asciidoc @@ -0,0 +1,31 @@ +[role="xpack"] +[[setting-up-authentication]] +== User authentication + +Authentication identifies an individual. To gain access to restricted resources, +a user must prove their identity, via passwords, credentials, or some other +means (typically referred to as authentication tokens). + +The {stack} authenticates users by identifying the users behind the requests +that hit the cluster and verifying that they are who they claim to be. The +authentication process is handled by one or more authentication services called +<>. + +You can use the native support for managing and authenticating users, or +integrate with external user management systems such as LDAP and Active +Directory. + +The {stack-security-features} provide built-in realms such as `native`,`ldap`, +`active_directory`, `pki`, `file`, and `saml`. If none of the built-in realms +meet your needs, you can also build your own custom realm and plug it into the +{stack}. + +When {security-features} are enabled, depending on the realms you've configured, +you must attach your user credentials to the requests sent to {es}. For example, +when using realms that support usernames and passwords you can simply attach +{wikipedia}/Basic_access_authentication[basic auth] header to the requests. + +The {security-features} provide two services: the token service and the api key +service. You can use these services to exchange the current authentication for +a token or key. This token or key can then be used as credentials for authenticating +new requests. These services are enabled by default when TLS/SSL is enabled for HTTP. diff --git a/x-pack/docs/en/security/authentication/pki-realm.asciidoc b/x-pack/docs/en/security/authentication/pki-realm.asciidoc new file mode 100644 index 0000000000000..41976c0425a06 --- /dev/null +++ b/x-pack/docs/en/security/authentication/pki-realm.asciidoc @@ -0,0 +1,21 @@ +[role="xpack"] +[[pki-realm]] +=== PKI user authentication + +You can configure {stack} {security-features} to use Public Key Infrastructure +(PKI) certificates to authenticate users in {es}. This requires clients to +present X.509 certificates. + +NOTE: You cannot use PKI certificates to authenticate users in {kib}. + +To use PKI in {es}, you configure a PKI realm, enable client authentication on +the desired network layers (transport or http), and map the Distinguished Names +(DNs) from the user certificates to roles in the +<>. + +See {ref}/configuring-pki-realm.html[Configuring a PKI realm]. + +[[pki-settings]] +==== PKI realm settings + +See {ref}/security-settings.html#ref-pki-settings[PKI realm settings]. diff --git a/x-pack/docs/en/security/authentication/realm-chains.asciidoc b/x-pack/docs/en/security/authentication/realm-chains.asciidoc new file mode 100644 index 0000000000000..d27106047fa5a --- /dev/null +++ b/x-pack/docs/en/security/authentication/realm-chains.asciidoc @@ -0,0 +1,98 @@ +[role="xpack"] +[[realm-chains]] +=== Realm chains + +<> live within a _realm chain_. 
It is essentially a prioritized list of
+configured realms (typically of various types). The order of the list determines
+the order in which the realms are consulted. You should make sure each
+configured realm has a distinct `order` setting. In the event that two or more
+realms have the same `order`, they are processed in `name` order.
+During the authentication process, the {stack} {security-features} consult and
+try to authenticate the request one realm at a time.
+Once one of the realms successfully authenticates the request, the authentication
+is considered successful and the authenticated user is associated
+with the request (which then proceeds to the authorization phase). If a realm
+cannot authenticate the request, the next realm in the chain is
+consulted. If none of the realms in the chain can authenticate the request, the
+authentication is considered unsuccessful and an authentication error
+is returned (as HTTP status code `401`).
+
+NOTE: Some systems (e.g. Active Directory) have a temporary lock-out period after
+      several successive failed login attempts. If the same username exists in
+      multiple realms, unintentional account lockouts are possible. For more
+      information, please see <>.
+
+The default realm chain contains the `native` and `file` realms. To explicitly
+configure a realm chain, you specify the chain in `elasticsearch.yml`. When you
+configure a realm chain, only the realms you specify are used for authentication.
+To use the `native` and `file` realms, you must include them in the chain.
+
+The following snippet configures a realm chain that includes the `file` and
+`native` realms, as well as two LDAP realms and an Active Directory realm.
+
+[source,yaml]
+----------------------------------------
+xpack.security.authc:
+  realms:
+
+    file:
+      type: file
+      order: 0
+
+    native:
+      type: native
+      order: 1
+
+    ldap1:
+      type: ldap
+      order: 2
+      enabled: false
+      url: 'url_to_ldap1'
+      ...
+
+    ldap2:
+      type: ldap
+      order: 3
+      url: 'url_to_ldap2'
+      ...
+
+    ad1:
+      type: active_directory
+      order: 4
+      url: 'url_to_ad'
+----------------------------------------
+
+As can be seen above, each realm has a unique name that identifies it and each
+realm type dictates its own set of required and optional settings. That said,
+there are
+{ref}/security-settings.html#ref-realm-settings[settings that are common to all realms].
+
+[[authorization_realms]]
+==== Delegating authorization to another realm
+
+Some realms have the ability to perform _authentication_ internally, but delegate the
+lookup and assignment of roles (that is, _authorization_) to another realm.
+
+For example, you may wish to use a PKI realm to authenticate your users with
+TLS client certificates, but then look up that user in an LDAP realm and use
+their LDAP group assignments to determine their roles in Elasticsearch.
+
+Any realm that supports retrieving users (without needing their credentials)
+can be used as an _authorization realm_ (that is, its name may appear as one of
+the values in the list of `authorization_realms`). See <> for
+further explanation on which realms support this.
+
+For realms that support this feature, it can be enabled by configuring the
+`authorization_realms` setting on the authenticating realm. Check the list of
+{ref}/security-settings.html#realm-settings[supported settings] for each realm to see if it supports the `authorization_realms` setting.
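+
+For example, the following sketch shows a PKI realm that authenticates users
+with TLS client certificates but delegates role lookup to an LDAP realm. The
+realm names, URL, and base DN are placeholder values:
+
+[source,yaml]
+----------------------------------------
+xpack.security.authc:
+  realms:
+    pki1:
+      type: pki
+      order: 1
+      # Authenticate with the client certificate, but look the user up
+      # in the ldap1 realm to determine roles.
+      authorization_realms: ldap1
+    ldap1:
+      type: ldap
+      order: 2
+      url: "ldaps://ldap.example.com:636"
+      user_search:
+        base_dn: "dc=example,dc=com"
+----------------------------------------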
+ +If delegated authorization is enabled for a realm, it authenticates the user in +its standard manner (including relevant caching) then looks for that user in the +configured list of authorization realms. It tries each realm in the order they +are specified in the `authorization_realms` setting. The user is retrieved by +principal - the user must have identical usernames in the _authentication_ and +_authorization realms_. If the user cannot be found in any of the authorization +realms, authentication fails. + +NOTE: Delegated authorization requires a +https://www.elastic.co/subscriptions[Platinum or Trial license]. diff --git a/x-pack/docs/en/security/authentication/realms.asciidoc b/x-pack/docs/en/security/authentication/realms.asciidoc new file mode 100644 index 0000000000000..e57493a80bfa6 --- /dev/null +++ b/x-pack/docs/en/security/authentication/realms.asciidoc @@ -0,0 +1,67 @@ +[role="xpack"] +[[realms]] +=== Realms + +The {stack-security-features} authenticate users by using realms and one or more +<>. + +A _realm_ is used to resolve and authenticate users based on authentication +tokens. The {security-features} provide the following built-in realms: + +_native_:: +An internal realm where users are stored in a dedicated {es} index. +This realm supports an authentication token in the form of username and password, +and is available by default when no realms are explicitly configured. The users +are managed via the {ref}/security-api.html#security-user-apis[user management APIs]. +See <>. + +_ldap_:: +A realm that uses an external LDAP server to authenticate the +users. This realm supports an authentication token in the form of username and +password, and requires explicit configuration in order to be used. See +<>. + +_active_directory_:: +A realm that uses an external Active Directory Server to authenticate the +users. With this realm, users are authenticated by usernames and passwords. +See <>. + +_pki_:: +A realm that authenticates users using Public Key Infrastructure (PKI). This +realm works in conjunction with SSL/TLS and identifies the users through the +Distinguished Name (DN) of the client's X.509 certificates. See <>. + +_file_:: +An internal realm where users are defined in files stored on each node in the +{es} cluster. This realm supports an authentication token in the form +of username and password and is always available. See <>. + +_saml_:: +A realm that facilitates authentication using the SAML 2.0 Web SSO protocol. +This realm is designed to support authentication through {kib} and is not +intended for use in the REST API. See <>. + +_kerberos_:: +A realm that authenticates a user using Kerberos authentication. Users are +authenticated on the basis of Kerberos tickets. See <>. + +The {security-features} also support custom realms. If you need to integrate +with another authentication system, you can build a custom realm plugin. For +more information, see <>. + +==== Internal and external realms + +Realm types can roughly be classified in two categories: + +Internal:: Realms that are internal to Elasticsearch and don't require any +communication with external parties. They are fully managed by the {stack} +{security-features}. There can only be a maximum of one configured realm per +internal realm type. The {security-features} provide two internal realm types: +`native` and `file`. + +External:: Realms that require interaction with parties/components external to +{es}, typically, with enterprise grade identity management systems. 
Unlike +internal realms, there can be as many external realms as one would like - each +with its own unique name and configuration. The {security-features} provide the +following external realm types: `ldap`, `active_directory`, `saml`, `kerberos`, +and `pki`. diff --git a/x-pack/docs/en/security/authentication/saml-guide.asciidoc b/x-pack/docs/en/security/authentication/saml-guide.asciidoc index 2fdf8b8d4fae1..0ff903213e924 100644 --- a/x-pack/docs/en/security/authentication/saml-guide.asciidoc +++ b/x-pack/docs/en/security/authentication/saml-guide.asciidoc @@ -869,6 +869,7 @@ It is possible to have one or more {kib} instances that use SAML, while other instances use basic authentication against another realm type (e.g. <> or <>). +[[saml-troubleshooting]] === Troubleshooting SAML Realm Configuration The SAML 2.0 specification offers a lot of options and flexibility for the implementers diff --git a/x-pack/docs/en/security/authentication/saml-realm.asciidoc b/x-pack/docs/en/security/authentication/saml-realm.asciidoc new file mode 100644 index 0000000000000..44fc0ff5b8b32 --- /dev/null +++ b/x-pack/docs/en/security/authentication/saml-realm.asciidoc @@ -0,0 +1,41 @@ +[role="xpack"] +[[saml-realm]] +=== SAML authentication +The {stack} {security-features} support user authentication using SAML +single sign-on (SSO). The {security-features} provide this support using the Web +Browser SSO profile of the SAML 2.0 protocol. + +This protocol is specifically designed to support authentication via an +interactive web browser, so it does not operate as a standard authentication +realm. Instead, there are {kib} and {es} {security-features} that work +together to enable interactive SAML sessions. + +This means that the SAML realm is not suitable for use by standard REST clients. +If you configure a SAML realm for use in {kib}, you should also configure +another realm, such as the <> in your authentication +chain. + +In order to simplify the process of configuring SAML authentication within the +Elastic Stack, there is a step-by-step guide to +<>. + +The remainder of this document will describe {es} specific configuration options +for SAML realms. + +[[saml-settings]] +==== SAML realm settings + +See {ref}/security-settings.html#ref-saml-settings[SAML realm settings]. + +==== SAML realm signing settings + +See {ref}/security-settings.html#ref-saml-signing-settings[SAML realm signing settings]. + +==== SAML realm encryption settings + +See {ref}/security-settings.html#ref-saml-encryption-settings[SAML realm encryption settings]. + +==== SAML realm SSL settings + +See {ref}/security-settings.html#ref-saml-ssl-settings[SAML realm SSL settings]. + diff --git a/x-pack/docs/en/security/authentication/token-authentication-services.asciidoc b/x-pack/docs/en/security/authentication/token-authentication-services.asciidoc new file mode 100644 index 0000000000000..04e8238a89ed3 --- /dev/null +++ b/x-pack/docs/en/security/authentication/token-authentication-services.asciidoc @@ -0,0 +1,58 @@ +[role="xpack"] +[[token-authentication-services]] +=== Token-based authentication services + +The {stack-security-features} authenticate users by using realms and one or more token-based +authentication services. The token-based authentication services are used for +authentication and for the management of tokens. These tokens can be used as +credentials attached to requests that are sent to {es}. 
When {es} receives a request +that must be authenticated, it consults first the token-based authentication +services then the realm chain. + +The {security-features} provide the following built-in token-based authentication +services, which are listed in the order they are consulted: + +_token-service_:: +The token service uses the {ref}/security-api-get-token.html[get token API] to +generate access tokens and refresh tokens based on the OAuth2 specification. +The access token is a short-lived token. By default, it expires after 20 minutes +but it can be configured to last a maximum of 1 hour. It can be refreshed by +using a refresh token, which has a lifetime of 24 hours. The access token is a +bearer token. You can use it by sending a request with an `Authorization` +header with a value that has the prefix "Bearer " followed by the value of the +access token. For example: ++ +-- +[source,shell] +-------------------------------------------------- +curl -H "Authorization: Bearer dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ==" http://localhost:9200/_cluster/health +-------------------------------------------------- +// NOTCONSOLE +-- + +_api-key-service_:: +The API key service uses the +{ref}/security-api-create-api-key.html[create API key API] to generate API keys. +By default, the API keys do not expire. When you make a request to create API +keys, you can specify an expiration and permissions for the API key. The +permissions are limited by the authenticated user's permissions. You can use the +API key by sending a request with an `Authorization` header with a value that +has the prefix "ApiKey " followed by the credentials. The credentials are the +base64 encoding of the API key ID and the API key joined by a colon. For example: ++ +-- +[source,shell] +-------------------------------------------------- +curl -H "Authorization: ApiKey VnVhQ2ZHY0JDZGJrUW0tZTVhT3g6dWkybHAyYXhUTm1zeWFrdzl0dk5udw==" http://localhost:9200/_cluster/health +-------------------------------------------------- +// NOTCONSOLE +-- + +Depending on your use case, you may want to decide on the lifetime of the tokens +generated by these services. You can then use this information to decide which +service to use to generate and manage the tokens. Non-expiring API keys may seem +like the easy option but you must consider the security implications that come +with non-expiring keys. Both the _token-service_ and _api-key-service_ permit +you to invalidate the tokens. See +{ref}/security-api-invalidate-token.html[invalidate token API] and +{ref}/security-api-invalidate-api-key.html[invalidate API key API]. diff --git a/x-pack/docs/en/security/authorization/built-in-roles.asciidoc b/x-pack/docs/en/security/authorization/built-in-roles.asciidoc new file mode 100644 index 0000000000000..f17bcb391f909 --- /dev/null +++ b/x-pack/docs/en/security/authorization/built-in-roles.asciidoc @@ -0,0 +1,145 @@ +[role="xpack"] +[[built-in-roles]] +=== Built-in roles + +The {stack-security-features} apply a default role to all users, including +<>. The default role enables users to access +the authenticate endpoint, change their own passwords, and get information about +themselves. + +There is also a set of built-in roles you can explicitly assign to users. These +roles have a fixed set of privileges and cannot be updated. + +[[built-in-roles-apm-system]] `apm_system` :: +Grants access necessary for the APM system user to send system-level data +(such as monitoring) to {es}. 
+ +[[built-in-roles-beats-admin]] `beats_admin` :: +Grants access to the `.management-beats` index, which contains configuration +information for the Beats. + +[[built-in-roles-beats-system]] `beats_system` :: +Grants access necessary for the Beats system user to send system-level data +(such as monitoring) to {es}. ++ +-- +[NOTE] +=============================== +* This role should not be assigned to users as the granted permissions may +change between releases. +* This role does not provide access to the beats indices and is not +suitable for writing beats output to {es}. +=============================== + +-- + +[[built-in-roles-ingest-user]] `ingest_admin` :: +Grants access to manage *all* index templates and *all* ingest pipeline configurations. ++ +NOTE: This role does *not* provide the ability to create indices; those privileges +must be defined in a separate role. + +[[built-in-roles-kibana-dashboard]] `kibana_dashboard_only_user` :: +Grants access to the {kib} Dashboard and read-only permissions to Kibana. +This role does not have access to editing tools in {kib}. For more +information, see +{kibana-ref}/xpack-dashboard-only-mode.html[{kib} Dashboard Only Mode]. + +[[built-in-roles-kibana-system]] `kibana_system` :: +Grants access necessary for the {kib} system user to read from and write to the +{kib} indices, manage index templates and tokens, and check the availability of +the {es} cluster. This role grants read access to the `.monitoring-*` indices +and read and write access to the `.reporting-*` indices. For more information, +see {kibana-ref}/using-kibana-with-security.html[Configuring Security in {kib}]. ++ +NOTE: This role should not be assigned to users as the granted permissions may +change between releases. + +[[built-in-roles-kibana-user]] `kibana_user`:: +Grants access to all features in {kib}. For more information on Kibana authorization, +see {kibana-ref}/xpack-security-authorization.html[Kibana Authorization]. + +[[built-in-roles-logstash-admin]] `logstash_admin` :: +Grants access to the `.logstash*` indices for managing configurations. + +[[built-in-roles-logstash-system]] `logstash_system` :: +Grants access necessary for the Logstash system user to send system-level data +(such as monitoring) to {es}. For more information, see +{logstash-ref}/ls-security.html[Configuring Security in Logstash]. ++ +-- +[NOTE] +=============================== +* This role should not be assigned to users as the granted permissions may +change between releases. +* This role does not provide access to the logstash indices and is not +suitable for use within a Logstash pipeline. +=============================== +-- + +[[built-in-roles-ml-admin]] `machine_learning_admin`:: +Grants `manage_ml` cluster privileges, read access to `.ml-anomalies*`, +`.ml-notifications*`, `.ml-state*`, `.ml-meta*` indices and write access to +`.ml-annotations*` indices. + +[[built-in-roles-ml-user]] `machine_learning_user`:: +Grants the minimum privileges required to view {ml} configuration, +status, and work with results. This role grants `monitor_ml` cluster privileges, +read access to the `.ml-notifications` and `.ml-anomalies*` indices +(which store {ml} results), and write access to `.ml-annotations*` indices. + +[[built-in-roles-monitoring-user]] `monitoring_user`:: +Grants the minimum privileges required for any user of {monitoring} other than those +required to use {kib}. This role grants access to the monitoring indices and grants +privileges necessary for reading basic cluster information. 
Monitoring users should
+also be assigned the `kibana_user` role.
+
+[[built-in-roles-remote-monitoring-agent]] `remote_monitoring_agent`::
+Grants the minimum privileges required to write data into the monitoring indices
+(`.monitoring-*`). This role also has the privileges necessary to create
+{metricbeat} indices (`metricbeat-*`) and write data into them.
+
+[[built-in-roles-remote-monitoring-collector]] `remote_monitoring_collector`::
+Grants the minimum privileges required to collect monitoring data for the {stack}.
+
+[[built-in-roles-reporting-user]] `reporting_user`::
+Grants the specific privileges required for users of {reporting} other than those
+required to use {kib}. This role grants access to the reporting indices; each
+user has access to only their own reports. Reporting users should also be
+assigned the `kibana_user` role and a role that grants them access to the data
+that will be used to generate reports.
+
+[[built-in-roles-snapshot-user]] `snapshot_user`::
+Grants the necessary privileges to create snapshots of **all** the indices and
+to view their metadata. This role enables users to view the configuration of
+existing snapshot repositories and snapshot details. It does not grant authority
+to remove or add repositories or to restore snapshots. It also does not grant
+the ability to change index settings or to read or update index data.
+
+[[built-in-roles-superuser]] `superuser`::
+Grants full access to the cluster, including all indices and data. A user with
+the `superuser` role can also manage users and roles and
+<> any other user in the system. Due to the
+permissive nature of this role, take extra care when assigning it to a user.
+
+[[built-in-roles-transport-client]] `transport_client`::
+Grants the privileges required to access the cluster through the Java Transport
+Client. The Java Transport Client fetches information about the nodes in the
+cluster using the _Node Liveness API_ and the _Cluster State API_ (when
+sniffing is enabled). Assign your users this role if they use the
+Transport Client.
++
+NOTE: Using the Transport Client effectively means the users are granted access
+to the cluster state. This means users can view the metadata of all indices,
+index templates, mappings, nodes, and basically everything else about the cluster.
+However, this role does not grant permission to view the data in all indices.
+
+[[built-in-roles-watcher-admin]] `watcher_admin`::
++
+Grants read access to the `.watches` index, read access to the watch history and
+the triggered watches index, and allows all watcher actions to be executed.
+
+[[built-in-roles-watcher-user]] `watcher_user`::
++
+Grants read access to the `.watches` index, the get watch action, and the watcher
+stats.
diff --git a/x-pack/docs/en/security/authorization/document-level-security.asciidoc b/x-pack/docs/en/security/authorization/document-level-security.asciidoc
new file mode 100644
index 0000000000000..badbf15ed6159
--- /dev/null
+++ b/x-pack/docs/en/security/authorization/document-level-security.asciidoc
@@ -0,0 +1,60 @@
+[role="xpack"]
+[[document-level-security]]
+=== Document level security
+
+Document level security restricts the documents that users have read access to.
+In particular, it restricts which documents can be accessed from document-based
+read APIs.
+
+To enable document level security, you use a query to specify the documents that
+each role can access.
The document query is associated with a particular index +or index pattern and operates in conjunction with the privileges specified for +the indices. + +The following role definition grants read access only to documents that +belong to the `click` category within all the `events-*` indices: + +[source,js] +-------------------------------------------------- +POST /_xpack/security/role/click_role +{ + "indices": [ + { + "names": [ "events-*" ], + "privileges": [ "read" ], + "query": "{\"match\": {\"category\": \"click\"}}" + } + ] +} +-------------------------------------------------- +// CONSOLE + +NOTE: Omitting the `query` entry entirely disables document level security for + the respective indices permission entry. + +The specified `query` expects the same format as if it was defined in the +search request and supports the full {es} {ref}/query-dsl.html[Query DSL]. + +For example, the following role grants read access only to the documents whose +`department_id` equals `12`: + +[source,js] +-------------------------------------------------- +POST /_xpack/security/role/dept_role +{ + "indices" : [ + { + "names" : [ "*" ], + "privileges" : [ "read" ], + "query" : { + "term" : { "department_id" : 12 } + } + } + ] +} +-------------------------------------------------- +// CONSOLE + +NOTE: `query` also accepts queries written as string values. + +For more information, see <>. \ No newline at end of file diff --git a/x-pack/docs/en/security/authorization/field-level-security.asciidoc b/x-pack/docs/en/security/authorization/field-level-security.asciidoc new file mode 100644 index 0000000000000..1a8bc2fe2e0c8 --- /dev/null +++ b/x-pack/docs/en/security/authorization/field-level-security.asciidoc @@ -0,0 +1,230 @@ +[role="xpack"] +[[field-level-security]] +=== Field level security + +Field level security restricts the fields that users have read access to. +In particular, it restricts which fields can be accessed from document-based +read APIs. + +To enable field level security, specify the fields that each role can access +as part of the indices permissions in a role definition. Field level security is +thus bound to a well-defined set of indices (and potentially a set of +<>). + +The following role definition grants read access only to the `category`, +`@timestamp`, and `message` fields in all the `events-*` indices. + +[source,js] +-------------------------------------------------- +POST /_xpack/security/role/test_role1 +{ + "indices": [ + { + "names": [ "events-*" ], + "privileges": [ "read" ], + "field_security" : { + "grant" : [ "category", "@timestamp", "message" ] + } + } + ] +} +-------------------------------------------------- +// CONSOLE + +Access to the following meta fields is always allowed: `_id`, +`_type`, `_parent`, `_routing`, `_timestamp`, `_ttl`, `_size` and `_index`. If +you specify an empty list of fields, only these meta fields are accessible. + +NOTE: Omitting the fields entry entirely disables field level security. + +You can also specify field expressions. For example, the following +example grants read access to all fields that start with an `event_` prefix: + +[source,js] +-------------------------------------------------- +POST /_xpack/security/role/test_role2 +{ + "indices" : [ + { + "names" : [ "*" ], + "privileges" : [ "read" ], + "field_security" : { + "grant" : [ "event_*" ] + } + } + ] +} +-------------------------------------------------- +// CONSOLE + +Use the dot notations to refer to nested fields in more complex documents. 
For +example, assuming the following document: + +[source,js] +-------------------------------------------------- +{ + "customer": { + "handle": "Jim", + "email": "jim@mycompany.com", + "phone": "555-555-5555" + } +} +-------------------------------------------------- +// NOTCONSOLE + +The following role definition enables only read access to the customer `handle` +field: + +[source,js] +-------------------------------------------------- +POST /_xpack/security/role/test_role3 +{ + "indices" : [ + { + "names" : [ "*" ], + "privileges" : [ "read" ], + "field_security" : { + "grant" : [ "customer.handle" ] + } + } + ] +} +-------------------------------------------------- +// CONSOLE + +This is where wildcard support shines. For example, use `customer.*` to enable +only read access to the `customer` data: + +[source,js] +-------------------------------------------------- +POST /_xpack/security/role/test_role4 +{ + "indices" : [ + { + "names" : [ "*" ], + "privileges" : [ "read" ], + "field_security" : { + "grant" : [ "customer.*" ] + } + } + ] +} +-------------------------------------------------- +// CONSOLE + +You can deny permission to access fields with the following syntax: + +[source,js] +-------------------------------------------------- +POST /_xpack/security/role/test_role5 +{ + "indices" : [ + { + "names" : [ "*" ], + "privileges" : [ "read" ], + "field_security" : { + "grant" : [ "*"], + "except": [ "customer.handle" ] + } + } + ] +} +-------------------------------------------------- +// CONSOLE + +The following rules apply: + +* The absence of `field_security` in a role is equivalent to * access. +* If permission has been granted explicitly to some fields, you can specify +denied fields. The denied fields must be a subset of the fields to which +permissions were granted. +* Defining denied and granted fields implies access to all granted fields except +those which match the pattern in the denied fields. + +For example: + +[source,js] +-------------------------------------------------- +POST /_xpack/security/role/test_role6 +{ + "indices" : [ + { + "names" : [ "*" ], + "privileges" : [ "read" ], + "field_security" : { + "except": [ "customer.handle" ], + "grant" : [ "customer.*" ] + } + } + ] +} +-------------------------------------------------- +// CONSOLE + +In the above example, users can read all fields with the prefix "customer." +except for "customer.handle". + +An empty array for `grant` (for example, `"grant" : []`) means that access has +not been granted to any fields. + +When a user has several roles that specify field level permissions, the +resulting field level permissions per index are the union of the individual role +permissions. For example, if these two roles are merged: + +[source,js] +-------------------------------------------------- +POST /_xpack/security/role/test_role7 +{ + "indices" : [ + { + "names" : [ "*" ], + "privileges" : [ "read" ], + "field_security" : { + "grant": [ "a.*" ], + "except" : [ "a.b*" ] + } + } + ] +} + +POST /_xpack/security/role/test_role8 +{ + "indices" : [ + { + "names" : [ "*" ], + "privileges" : [ "read" ], + "field_security" : { + "grant": [ "a.b*" ], + "except" : [ "a.b.c*" ] + } + } + ] +} +-------------------------------------------------- +// CONSOLE + +The resulting permission is equal to: + +[source,js] +-------------------------------------------------- +{ + // role 1 + role 2 + ... 
+ "indices" : [ + { + "names" : [ "*" ], + "privileges" : [ "read" ], + "field_security" : { + "grant": [ "a.*" ], + "except" : [ "a.b.c*" ] + } + } + ] +} +-------------------------------------------------- +// NOTCONSOLE + +NOTE: Field-level security should not be set on {ref}/alias.html[`alias`] fields. To secure a +concrete field, its field name must be used directly. + +For more information, see <>. \ No newline at end of file diff --git a/x-pack/docs/en/security/authorization/images/authorization.png b/x-pack/docs/en/security/authorization/images/authorization.png new file mode 100644 index 0000000000000..1d692f2e3a9e2 Binary files /dev/null and b/x-pack/docs/en/security/authorization/images/authorization.png differ diff --git a/x-pack/docs/en/security/authorization/index.asciidoc b/x-pack/docs/en/security/authorization/index.asciidoc new file mode 100644 index 0000000000000..d6df16e41e04d --- /dev/null +++ b/x-pack/docs/en/security/authorization/index.asciidoc @@ -0,0 +1,22 @@ + +include::overview.asciidoc[] + +include::built-in-roles.asciidoc[] + +include::{xes-repo-dir}/security/authorization/managing-roles.asciidoc[] + +include::privileges.asciidoc[] + +include::document-level-security.asciidoc[] + +include::field-level-security.asciidoc[] + +include::{xes-repo-dir}/security/authorization/alias-privileges.asciidoc[] + +include::{xes-repo-dir}/security/authorization/mapping-roles.asciidoc[] + +include::{xes-repo-dir}/security/authorization/field-and-document-access-control.asciidoc[] + +include::{xes-repo-dir}/security/authorization/run-as-privilege.asciidoc[] + +include::{xes-repo-dir}/security/authorization/custom-authorization.asciidoc[] diff --git a/x-pack/docs/en/security/authorization/overview.asciidoc b/x-pack/docs/en/security/authorization/overview.asciidoc new file mode 100644 index 0000000000000..feb2014e30ee3 --- /dev/null +++ b/x-pack/docs/en/security/authorization/overview.asciidoc @@ -0,0 +1,75 @@ +[role="xpack"] +[[authorization]] +== User authorization + +The {stack-security-features} add _authorization_, which is the process of determining whether the user behind an incoming request is allowed to execute +the request. + +This process takes place after the user is successfully identified and +<>. + +[[roles]] +[float] +=== Role-based access control + +The {security-features} provide a role-based access control (RBAC) mechanism, +which enables you to authorize users by assigning privileges to roles and +assigning roles to users or groups. + +image::security/authorization/images/authorization.png[This image illustrates role-based access control] + +The authorization process revolves around the following constructs: + +_Secured Resource_:: +A resource to which access is restricted. Indices, aliases, documents, fields, +users, and the {es} cluster itself are all examples of secured objects. + +_Privilege_:: +A named group of one or more actions that a user may execute against a +secured resource. Each secured resource has its own sets of available privileges. +For example, `read` is an index privilege that represents all actions that enable +reading the indexed/stored data. For a complete list of available privileges +see <>. + +_Permissions_:: +A set of one or more privileges against a secured resource. 
Permissions can +easily be described in words, here are few examples: + * `read` privilege on the `products` index + * `manage` privilege on the cluster + * `run_as` privilege on `john` user + * `read` privilege on documents that match query X + * `read` privilege on `credit_card` field + +_Role_:: +A named set of permissions + +_User_:: +The authenticated user. + +_Group_:: +One or more groups to which a user belongs. Groups are not supported in some +realms, such as native, file, or PKI realms. + +A role has a unique name and identifies a set of permissions that translate to +privileges on resources. You can associate a user or group with an arbitrary +number of roles. When you map roles to groups, the roles of a user in that group +are the combination of the roles assigned to that group and the roles assigned +to that user. Likewise, the total set of permissions that a user has is defined +by the union of the permissions in all its roles. + +The method for assigning roles to users varies depending on which realms you use +to authenticate users. For more information, see <>. + +[[attributes]] +[float] +=== Attribute-based access control + +The {security-features} also provide an attribute-based access control (ABAC) +mechanism, which enables you to use attributes to restrict access to documents +in search queries and aggregations. For example, you can assign attributes to +users and documents, then implement an access policy in a role definition. Users +with that role can read a specific document only if they have all the required +attributes. + +For more information, see +https://www.elastic.co/blog/attribute-based-access-control-with-xpack[Document-level attribute-based access control with X-Pack 6.1]. diff --git a/x-pack/docs/en/security/authorization/privileges.asciidoc b/x-pack/docs/en/security/authorization/privileges.asciidoc new file mode 100644 index 0000000000000..b4d8a64e19149 --- /dev/null +++ b/x-pack/docs/en/security/authorization/privileges.asciidoc @@ -0,0 +1,208 @@ +[role="xpack"] +[[security-privileges]] +=== Security privileges + +This section lists the privileges that you can assign to a role. + +[[privileges-list-cluster]] +==== Cluster privileges + +[horizontal] +`all`:: +All cluster administration operations, like snapshotting, node shutdown/restart, +settings update, rerouting, or managing users and roles. + +`create_snapshot`:: +Privileges to create snapshots for existing repositories. Can also list and view +details on existing repositories and snapshots. + +`manage`:: +Builds on `monitor` and adds cluster operations that change values in the cluster. +This includes snapshotting, updating settings, and rerouting. It also includes +obtaining snapshot and restore status. This privilege does not include the +ability to manage security. + +`manage_ccr`:: +All {ccr} operations related to managing follower indices and auto-follow +patterns. It also includes the authority to grant the privileges necessary to +manage follower indices and auto-follow patterns. This privilege is necessary +only on clusters that contain follower indices. + +`manage_ilm`:: +All {Ilm} operations related to managing policies + +`manage_index_templates`:: +All operations on index templates. + +`manage_ingest_pipelines`:: +All operations on ingest node pipelines. + +`manage_ml`:: +All {ml} operations, such as creating and deleting {dfeeds}, jobs, and model +snapshots. 
++ +-- +NOTE: {dfeeds-cap} that were created prior to version 6.2 or created when +{security-features} were disabled run as a system user with elevated privileges, +including permission to read all indices. Newer {dfeeds} run with the security +roles of the user who created or updated them. + +-- + +`manage_pipeline`:: +All operations on ingest pipelines. + +`manage_rollup`:: +All rollup operations, including creating, starting, stopping and deleting +rollup jobs. + +`manage_saml`:: +Enables the use of internal {es} APIs to initiate and manage SAML authentication +on behalf of other users. + +`manage_security`:: +All security-related operations such as CRUD operations on users and roles and +cache clearing. + +`manage_token`:: +All security-related operations on tokens that are generated by the {es} Token +Service. + +`manage_watcher`:: +All watcher operations, such as putting watches, executing, activate or acknowledging. ++ +-- +NOTE: Watches that were created prior to version 6.1 or created when the +{security-features} were disabled run as a system user with elevated privileges, +including permission to read and write all indices. Newer watches run with the +security roles of the user who created or updated them. + +-- + +`monitor`:: +All cluster read-only operations, like cluster health and state, hot threads, +node info, node and cluster stats, and pending cluster tasks. + +`monitor_ml`:: +All read only {ml} operations, such as getting information about {dfeeds}, jobs, +model snapshots, or results. + +`monitor_rollup`:: +All read only rollup operations, such as viewing the list of historical and +currently running rollup jobs and their capabilities. + +`monitor_watcher`:: +All read only watcher operations, such as getting a watch and watcher stats. + +`read_ccr`:: +All read only {ccr} operations, such as getting information about indices and +metadata for leader indices in the cluster. It also includes the authority to +check whether users have the appropriate privileges to follow leader indices. +This privilege is necessary only on clusters that contain leader indices. + +`read_ilm`:: +All read only {Ilm} operations, such as getting policies and checking the +status of {Ilm} + +`transport_client`:: +All privileges necessary for a transport client to connect. Required by the remote +cluster to enable <>. + +[[privileges-list-indices]] +==== Indices privileges + +[horizontal] +`all`:: +Any action on an index + +`create`:: +Privilege to index documents. Also grants access to the update mapping +action. ++ +-- +NOTE: This privilege does not restrict the index operation to the creation +of documents but instead restricts API use to the index API. The index API allows a user +to overwrite a previously indexed document. + +-- + +`create_index`:: +Privilege to create an index. A create index request may contain aliases to be +added to the index once created. In that case the request requires the `manage` +privilege as well, on both the index and the aliases names. + +`delete`:: +Privilege to delete documents. + +`delete_index`:: +Privilege to delete an index. + +`index`:: +Privilege to index and update documents. Also grants access to the update +mapping action. + +`manage`:: +All `monitor` privileges plus index administration (aliases, analyze, cache clear, +close, delete, exists, flush, mapping, open, force merge, refresh, settings, +search shards, templates, validate). 
+ +`manage_follow_index`:: +All actions that are required to manage the lifecycle of a follower index, which +includes creating a follower index, closing it, and converting it to a regular +index. This privilege is necessary only on clusters that contain follower indices. + +`manage_ilm`:: +All {Ilm} operations relating to managing the execution of policies of an index +This includes operations like retrying policies, and removing a policy +from an index. + +`manage_leader_index`:: +All actions that are required to manage the lifecycle of a leader index, which +includes {ref}/ccr-post-forget-follower.html[forgetting a follower]. This +privilege is necessary only on clusters that contain leader indices. + +`monitor`:: +All actions that are required for monitoring (recovery, segments info, index +stats and status). + +`read`:: +Read only access to actions (count, explain, get, mget, get indexed scripts, +more like this, multi percolate/search/termvector, percolate, scroll, +clear_scroll, search, suggest, tv). + +`read_cross_cluster`:: +Read only access to the search action from a <>. + +`view_index_metadata`:: +Read-only access to index metadata (aliases, aliases exists, get index, exists, field mappings, +mappings, search shards, type exists, validate, warmers, settings, ilm). This +privilege is primarily available for use by {kib} users. + +`write`:: +Privilege to perform all write operations to documents, which includes the +permission to index, update, and delete documents as well as performing bulk +operations. Also grants access to the update mapping action. + + +==== Run as privilege + +The `run_as` permission enables an authenticated user to submit requests on +behalf of another user. The value can be a user name or a comma-separated list +of user names. (You can also specify users as an array of strings or a YAML +sequence.) For more information, see +<>. + +[[application-privileges]] +==== Application privileges + +Application privileges are managed within {es} and can be retrieved with the +{ref}/security-api-has-privileges.html[has privileges API] and the +{ref}/security-api-get-privileges.html[get application privileges API]. They do +not, however, grant access to any actions or resources within {es}. Their +purpose is to enable applications to represent and store their own privilege +models within {es} roles. + +To create application privileges, use the +{ref}/security-api-put-privileges.html[add application privileges API]. You can +then associate these application privileges with roles, as described in +<>. diff --git a/x-pack/docs/en/security/get-started-builtin-users.asciidoc b/x-pack/docs/en/security/get-started-builtin-users.asciidoc new file mode 100644 index 0000000000000..0f8d109d58eae --- /dev/null +++ b/x-pack/docs/en/security/get-started-builtin-users.asciidoc @@ -0,0 +1,27 @@ +There are <> that you can use for specific +administrative purposes: `apm_system`, `beats_system`, `elastic`, `kibana`, +`logstash_system`, and `remote_monitoring_user`. + +Before you can use them, you must set their passwords: + +. Restart {es}. For example, if you installed {es} with a `.tar.gz` package, run +the following command from the {es} directory: ++ +-- +["source","sh",subs="attributes,callouts"] +---------------------------------------------------------------------- +./bin/elasticsearch +---------------------------------------------------------------------- + +See {ref}/starting-elasticsearch.html[Starting {es}]. +-- + +. Set the built-in users' passwords. 
Run the following command from the {es} +directory: ++ +-- +["source","sh",subs="attributes,callouts"] +---------------------------------------------------------------------- +./bin/elasticsearch-setup-passwords interactive +---------------------------------------------------------------------- +-- diff --git a/x-pack/docs/en/security/get-started-enable-security.asciidoc b/x-pack/docs/en/security/get-started-enable-security.asciidoc new file mode 100644 index 0000000000000..bbe2999fc6753 --- /dev/null +++ b/x-pack/docs/en/security/get-started-enable-security.asciidoc @@ -0,0 +1,35 @@ +When you use the basic and trial licenses, the {es} {security-features} are +disabled by default. To enable them: + +. Stop {kib}. The method for starting and stopping {kib} varies depending on +how you installed it. For example, if you installed {kib} from an archive +distribution (`.tar.gz` or `.zip`), stop it by entering `Ctrl-C` on the command +line. See {kibana-ref}/start-stop.html[Starting and stopping {kib}]. + +. Stop {es}. For example, if you installed {es} from an archive distribution, +enter `Ctrl-C` on the command line. See +{ref}/stopping-elasticsearch.html[Stopping {es}]. + +. Add the `xpack.security.enabled` setting to the +`ES_PATH_CONF/elasticsearch.yml` file. ++ +-- +TIP: The `ES_PATH_CONF` environment variable contains the path for the {es} +configuration files. If you installed {es} using archive distributions (`zip` or +`tar.gz`), it defaults to `ES_HOME/config`. If you used package distributions +(Debian or RPM), it defaults to `/etc/elasticsearch`. For more information, see +{ref}/settings.html[Configuring {es}]. + +For example, add the following setting: + +[source,yaml] +---- +xpack.security.enabled: true +---- + +TIP: If you have a basic or trial license, the default value for this setting is +`false`. If you have a gold or higher license, the default value is `true`. +Therefore, it is a good idea to explicitly add this setting to avoid confusion +about whether {security-features} are enabled. + +-- diff --git a/x-pack/docs/en/security/get-started-kibana-users.asciidoc b/x-pack/docs/en/security/get-started-kibana-users.asciidoc new file mode 100644 index 0000000000000..9cb89f6976a68 --- /dev/null +++ b/x-pack/docs/en/security/get-started-kibana-users.asciidoc @@ -0,0 +1,60 @@ +When the {es} {security-features} are enabled, users must log in to {kib} +with a valid user ID and password. + +{kib} also performs some tasks under the covers that require use of the +built-in `kibana` user. + +. Configure {kib} to use the built-in `kibana` user and the password that you +created: + +** If you don't mind having passwords visible in your configuration file, +uncomment and update the following settings in the `kibana.yml` file in your +{kib} directory: ++ +-- +TIP: If you installed {kib} using archive distributions (`zip` or +`tar.gz`), the `kibana.yml` configuration file is in `KIBANA_HOME/config`. If +you used package distributions (Debian or RPM), it's in `/etc/kibana`. For more +information, see {kibana-ref}/settings.html[Configuring {kib}]. + +For example, add the following settings: + +[source,yaml] +---- +elasticsearch.username: "kibana" +elasticsearch.password: "your_password" +---- + +Specify the password that you set with the `elasticsearch-setup-passwords` +command then save your changes to the file. +-- + +** If you prefer not to put your user ID and password in the `kibana.yml` file, +store them in a keystore instead. 
Run the following commands to create the {kib} +keystore and add the secure settings: ++ +-- +["source","sh",subs="attributes,callouts"] +---------------------------------------------------------------------- +./bin/kibana-keystore create +./bin/kibana-keystore add elasticsearch.username +./bin/kibana-keystore add elasticsearch.password +---------------------------------------------------------------------- + +When prompted, specify the `kibana` built-in user and its password for these +setting values. The settings are automatically applied when you start {kib}. +To learn more, see {kibana-ref}/secure-settings.html[Secure settings]. +-- + +. Restart {kib}. For example, if you installed +{kib} with a `.tar.gz` package, run the following command from the {kib} +directory: ++ +-- +["source","sh",subs="attributes,callouts"] +---------------------------------------------------------------------- +./bin/kibana +---------------------------------------------------------------------- + +See {kibana-ref}/start-stop.html[Starting and stopping {kib}]. +-- \ No newline at end of file diff --git a/x-pack/docs/en/security/get-started-security.asciidoc b/x-pack/docs/en/security/get-started-security.asciidoc new file mode 100644 index 0000000000000..141dfd6860ad9 --- /dev/null +++ b/x-pack/docs/en/security/get-started-security.asciidoc @@ -0,0 +1,380 @@ +[role="xpack"] +[testenv="basic"] +[[security-getting-started]] +== Tutorial: Getting started with security + +In this tutorial, you learn how to secure a cluster by configuring users and +roles in {es}, {kib}, {ls}, and {metricbeat}. + +[float] +[[get-started-security-prerequisites]] +=== Before you begin + +. Install and configure {es}, {kib}, {ls}, and {metricbeat} as described in +{stack-gs}/get-started-elastic-stack.html[Getting started with the {stack}]. ++ +-- +IMPORTANT: To complete this tutorial, you must install the default {es} and +{kib} packages, which include role-based access control (RBAC) and native +authentication {security-features}. When you install these products, they apply +basic licenses with no expiration dates. All of the subsequent steps in this +tutorial assume that you are using a basic license. For more information, see +{subscriptions} and <>. + +-- + +. Stop {ls}. The method for starting and stopping {ls} varies depending on whether +you are running it from the command line or running it as a service. For example, +if you are running {ls} from the command line, you can stop it by entering +`Ctrl-C`. See {logstash-ref}/shutdown.html[Shutting down {ls}]. + +. Stop {metricbeat}. For example, enter `Ctrl-C` on the command line where it is +running. + +. Launch the {kib} web interface by pointing your browser to port 5601. For +example, http://127.0.0.1:5601[http://127.0.0.1:5601]. + +[role="xpack"] +[[get-started-enable-security]] +=== Enable {es} {security-features} + +include::get-started-enable-security.asciidoc[] + +. Enable single-node discovery in the `ES_PATH_CONF/elasticsearch.yml` file. ++ +-- +This tutorial involves a single node cluster, but if you had multiple +nodes, you would enable {es} {security-features} on every node in the cluster +and configure Transport Layer Security (TLS) for internode-communication, which +is beyond the scope of this tutorial. By enabling single-node discovery, we are +postponing the configuration of TLS. 
For example, add the following setting: + +[source,yaml] +---- +discovery.type: single-node +---- + +For more information, see +{ref}/bootstrap-checks.html#single-node-discovery[Single-node discovery]. +-- + +When you enable {es} {security-features}, basic authentication is enabled by +default. To communicate with the cluster, you must specify a username and +password. Unless you <>, all requests +that don't include a user name and password are rejected. + +[role="xpack"] +[[get-started-built-in-users]] +=== Create passwords for built-in users + +include::get-started-builtin-users.asciidoc[] + +You need these built-in users in subsequent steps, so choose passwords that you +can remember! + +NOTE: This tutorial does not use the built-in `apm_system`, `logstash_system`, +`beats_system`, and `remote_monitoring_user` users, which are typically +associated with monitoring. For more information, see +{logstash-ref}/ls-security.html#ls-monitoring-user[Configuring credentials for {ls} monitoring] +and {metricbeat-ref}/monitoring.html[Monitoring {metricbeat}]. + +[role="xpack"] +[[get-started-kibana-user]] +=== Add the built-in user to {kib} + +include::get-started-kibana-users.asciidoc[] + +[role="xpack"] +[[get-started-authentication]] +=== Configure authentication + +Now that you've set up the built-in users, you need to decide how you want to +manage all the other users. + +The {stack} _authenticates_ users to ensure that they are valid. The +authentication process is handled by _realms_. You can use one or more built-in +realms, such as the native, file, LDAP, PKI, Active Directory, SAML, or Kerberos +realms. Alternatively, you can create your own custom realms. In this tutorial, +we'll use a native realm. + +In general, you configure realms by adding `xpack.security.authc.realms` +settings in the `elasticsearch.yml` file. However, the native realm is available +by default when no other realms are configured. Therefore, you don't need to do +any extra configuration steps in this tutorial. You can jump straight to +creating users! + +If you want to learn more about authentication and realms, see +<>. + +[role="xpack"] +[[get-started-users]] +=== Create users + +Let's create two users in the native realm. + +. Log in to {kib} with the `elastic` built-in user. + +. Go to the *Management / Security / Users* page: ++ +-- +[role="screenshot"] +image::security/images/management-builtin-users.jpg["User management screenshot in Kibana"] + +In this example, you can see a list of built-in users. +-- + +. Click *Create new user*. For example, create a user for yourself: ++ +-- +[role="screenshot"] +image::security/images/create-user.jpg["Creating a user in Kibana"] + +You'll notice that when you create a user, you can assign it a role. Don't +choose a role yet--we'll come back to that in subsequent steps. +-- + +. Click *Create new user* and create a `logstash_internal` user. ++ +-- +In {stack-gs}/get-started-elastic-stack.html[Getting started with the {stack}], +you configured {ls} to listen for {metricbeat} +input and to send the events to {es}. You therefore need to create a user +that {ls} can use to communicate with {es}. For example: + +[role="screenshot"] +image::security/images/create-logstash-user.jpg["Creating a {ls} user in {kib}"] +-- + +[role="xpack"] +[[get-started-roles]] +=== Assign roles + +By default, all users can change their own passwords, get information about +themselves, and run the `authenticate` API. 
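For example, any user can check their own identity and the roles currently assigned to
them with the authenticate API (shown here with the `_xpack/security` path used by the
role examples elsewhere in these docs; newer releases also expose it at
`/_security/_authenticate`):

[source,js]
----------------------------------
GET /_xpack/security/_authenticate
----------------------------------
// CONSOLE

The response includes the username and the list of roles assigned to it.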
If you want them to do more than +that, you need to give them one or more _roles_. + +Each role defines a specific set of actions (such as read, create, or delete) +that can be performed on specific secured resources (such as indices, aliases, +documents, fields, or clusters). To help you get up and running, there are +built-in roles. + +Go to the *Management / Security / Roles* page to see them: + +[role="screenshot"] +image::security/images/management-roles.jpg["Role management screenshot in Kibana"] + +Select a role to see more information about its privileges. For example, select +the `kibana_system` role to see its list of cluster and index privileges. To +learn more, see <>. + +Let's assign the `kibana_user` role to your user. Go back to the +*Management / Security / Users* page and select your user. Add the `kibana_user` +role and save the change. For example: + +[role="screenshot"] +image::security/images/assign-role.jpg["Assigning a role to a user in Kibana"] + +This user now has access to all features in {kib}. For more information about granting +access to Kibana see {kibana-ref}/xpack-security-authorization.html[Kibana Authorization]. + +If you completed all of the steps in +{stack-gs}/get-started-elastic-stack.html[Getting started with the {stack}], you should +have {metricbeat} data stored in {es}. Let's create two roles that grant +different levels of access to that data. + +Go to the *Management / Security / Roles* page and click *Create role*. + +Create a `metricbeat_reader` role that has `read` and `view_index_metadata` +privileges on the `metricbeat-*` indices: + +[role="screenshot"] +image::security/images/create-reader-role.jpg["Creating a role in Kibana"] + +Create a `metricbeat_writer` role that has `manage_index_templates` and `monitor` +cluster privileges, as well as `write`, `delete`, and `create_index` privileges +on the `metricbeat-*` indices: + +[role="screenshot"] +image::security/images/create-writer-role.jpg["Creating another role in Kibana"] + +Now go back to the *Management / Security / Users* page and assign these roles +to the appropriate users. Assign the `metricbeat_reader` role to your personal +user. Assign the `metricbeat_writer` role to the `logstash_internal` user. + +The list of users should now contain all of the built-in users as well as the +two you created. It should also show the appropriate roles for your users: + +[role="screenshot"] +image::security/images/management-users.jpg["User management screenshot in Kibana"] + +If you want to learn more about authorization and roles, see <>. + +[role="xpack"] +[[get-started-logstash-user]] +=== Add user information in {ls} + +In order for {ls} to send data successfully to {es}, you must configure its +authentication credentials in the {ls} configuration file. + +. Configure {ls} to use the `logstash_internal` user and the password that you +created: + +** If you don't mind having passwords visible in your configuration file, add +the following `user` and `password` settings in the `demo-metrics-pipeline.conf` +file in your {ls} directory: ++ +-- +[source,ruby] +---- +... + +output { + elasticsearch { + hosts => "localhost:9200" + manage_template => false + index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}" + user => "logstash_internal" <1> + password => "your_password" <2> + } +} +---- +<1> Specify the `logstash_internal` user that you created earlier in this tutorial. +<2> Specify the password that you chose for this user ID. 
+-- + +** If you prefer not to put your user ID and password in the configuration file, +store them in a keystore instead. ++ +-- +Run the following commands to create the {ls} +keystore and add the secure settings: + +["source","sh",subs="attributes,callouts"] +---------------------------------------------------------------------- +set +o history +export LOGSTASH_KEYSTORE_PASS=mypassword <1> +set -o history +./bin/logstash-keystore create +./bin/logstash-keystore add ES_USER +./bin/logstash-keystore add ES_PWD +---------------------------------------------------------------------- +<1> You can optionally protect access to the {ls} keystore by storing a password +in an environment variable called `LOGSTASH_KEYSTORE_PASS`. For more information, +see {logstash-ref}/keystore.html#keystore-password[Keystore password]. + +When prompted, specify the `logstash_internal` user and its password for the +`ES_USER` and `ES_PWD` values. + +NOTE: The {ls} keystore differs from the {kib} keystore. Whereas the {kib} +keystore enables you to store `kibana.yml` settings by name, the {ls} keystore +enables you to create arbitrary names that you can reference in the {ls} +configuration. To learn more, see +{logstash-ref}/keystore.html[Secrets keystore for secure settings]. + +You can now use these `ES_USER` and `ES_PWD` keys in your configuration +file. For example, add the `user` and `password` settings in the +`demo-metrics-pipeline.conf` file as follows: + +[source,ruby] +---- +... + +output { + elasticsearch { + hosts => "localhost:9200" + manage_template => false + index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}" + user => "${ES_USER}" + password => "${ES_PWD}" + } +} +---- +-- + +. Start {ls} by using the appropriate method for your environment. ++ +-- +For example, to +run {ls} from a command line, go to the {ls} directory and enter the following +command: + +["source","sh",subs="attributes,callouts"] +---------------------------------------------------------------------- +./bin/logstash -f demo-metrics-pipeline.conf +---------------------------------------------------------------------- + +To start {ls} as a service, see +{logstash-ref}/running-logstash.html[Running {ls} as a service on Debian or RPM]. +-- + +. If you were connecting directly from {metricbeat} to {es}, you would need +to configure authentication credentials for the {es} output in the {metricbeat} +configuration file. In +{stack-gs}/get-started-elastic-stack.html[Getting started with the {stack}], +however, you configured +{metricbeat} to send the data to {ls} for additional parsing, so no extra +settings are required in {metricbeat}. For more information, see +{metricbeat-ref}/securing-metricbeat.html[Securing {metricbeat}]. + +. Start {metricbeat} by using the appropriate method for your environment. ++ +-- +For example, on macOS, run the following command from the {metricbeat} directory: + +["source","sh",subs="attributes,callouts"] +---------------------------------------------------------------------- +./metricbeat -e +---------------------------------------------------------------------- + +For more methods, see {metricbeat-ref}/metricbeat-starting.html[Starting {metricbeat}]. +-- + +Wait a few minutes for new data to be sent from {metricbeat} to {ls} and {es}. + +[role="xpack"] +[[get-started-verify-users]] +=== View system metrics in {kib} + +Log in to {kib} with the user ID that has `metricbeat_reader` and `kibana_user` +roles (for example, `jdoe`). 
+
+These roles enable the user to see the system metrics in {kib} (for example, on
+the *Discover* page or in the
+http://localhost:5601/app/kibana#/dashboard/Metricbeat-system-overview[{metricbeat} system overview dashboard]).
+
+[float]
+[[gs-security-nextsteps]]
+=== What's next?
+
+Congratulations! You've successfully set up authentication and authorization by
+using the native realm. You learned how to create user IDs and roles that
+prevent unauthorized access to the {stack}.
+
+Next, you'll want to try other features that are unlocked by your trial license,
+such as {ml}. See <>.
+
+Later, when you're ready to increase the number of nodes in your cluster or set
+up a production environment, you'll want to encrypt communications across the
+{stack}. To learn how, read <>.
+
+For more detailed information about securing the {stack}, see:
+
+* {ref}/configuring-security.html[Configuring security in {es}]. Encrypt
+inter-node communications, set passwords for the built-in users, and manage your
+users and roles.
+
+* {kibana-ref}/using-kibana-with-security.html[Configuring security in {kib}].
+Set the authentication credentials in {kib} and encrypt communications between
+the browser and the {kib} server.
+
+* {logstash-ref}/ls-security.html[Configuring security in Logstash]. Set the
+authentication credentials for Logstash and encrypt communications between
+Logstash and {es}.
+
+* <>. Configure authentication
+credentials and encrypt connections to {es}.
+
+* <>.
+
+* {hadoop-ref}/security.html[Configuring {es} for Apache Hadoop to use secured transport].
+
diff --git a/x-pack/docs/en/security/how-security-works.asciidoc b/x-pack/docs/en/security/how-security-works.asciidoc
new file mode 100644
index 0000000000000..e05991cac3b7d
--- /dev/null
+++ b/x-pack/docs/en/security/how-security-works.asciidoc
@@ -0,0 +1,50 @@
+[role="xpack"]
+[[how-security-works]]
+== How security works
+
+An Elasticsearch cluster is typically made up of many moving parts. There are
+the Elasticsearch nodes that form the cluster and often Logstash instances,
+Kibana instances, Beats agents, and clients all communicating with the cluster.
+It should not come as a surprise that securing such clusters has many facets and
+layers.
+
+The {stack-security-features} provide the means to secure the Elastic cluster
+on several levels:
+
+ * <>
+ * <>
+ * Node/client authentication and channel encryption
+ * Auditing
+
+[float]
+=== Node/client authentication and channel encryption
+
+The {security-features} support configuring SSL/TLS for securing the
+communication channels to, from, and within the cluster. This support accounts for:
+
+ * Encryption of data transmitted over the wires
+ * Certificate-based node authentication - preventing unauthorized nodes/clients
+ from establishing a connection with the cluster.
+
+For more information, see <>.
+
+The {security-features} also enable you to <>
+which can be seen as a light mechanism for node/client authentication. With IP
+filtering, you can restrict the nodes and clients that can connect to the
+cluster based on their IP addresses. The IP filters configuration provides
+whitelisting and blacklisting of IPs, subnets and DNS domains.
+
+
+[float]
+=== Auditing
+When dealing with any secure system, it is critical to have an audit trail
+mechanism in place. Audit trails log various activities/events that occur in
+the system, enabling you to analyze and trace back past events when things go
+wrong (for example, a security breach).
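In {es}, such an audit trail can usually be switched on with a single node setting in
`elasticsearch.yml` (a minimal sketch; the full set of audit-related options is described
in the auditing settings reference):

[source,yaml]
----
xpack.security.audit.enabled: true
----

Because this is a per-node setting, add it to every node whose activity you want to
record.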
+ +The {security-features} provide such audit trail functionality for all nodes in +the cluster. You can configure the audit level which accounts for the type of +events that are logged. These events include failed authentication attempts, +user access denied, node connection denied, and more. + +For more information on auditing see <>. diff --git a/x-pack/docs/en/security/images/assign-role.jpg b/x-pack/docs/en/security/images/assign-role.jpg new file mode 100644 index 0000000000000..4771aa3b84f09 Binary files /dev/null and b/x-pack/docs/en/security/images/assign-role.jpg differ diff --git a/x-pack/docs/en/security/images/create-logstash-user.jpg b/x-pack/docs/en/security/images/create-logstash-user.jpg new file mode 100644 index 0000000000000..938ccb72ea3cf Binary files /dev/null and b/x-pack/docs/en/security/images/create-logstash-user.jpg differ diff --git a/x-pack/docs/en/security/images/create-reader-role.jpg b/x-pack/docs/en/security/images/create-reader-role.jpg new file mode 100644 index 0000000000000..4d301fcfe910e Binary files /dev/null and b/x-pack/docs/en/security/images/create-reader-role.jpg differ diff --git a/x-pack/docs/en/security/images/create-user.jpg b/x-pack/docs/en/security/images/create-user.jpg new file mode 100644 index 0000000000000..1ce905f3f545d Binary files /dev/null and b/x-pack/docs/en/security/images/create-user.jpg differ diff --git a/x-pack/docs/en/security/images/create-writer-role.jpg b/x-pack/docs/en/security/images/create-writer-role.jpg new file mode 100644 index 0000000000000..25ec820f36624 Binary files /dev/null and b/x-pack/docs/en/security/images/create-writer-role.jpg differ diff --git a/x-pack/docs/en/security/images/management-builtin-users.jpg b/x-pack/docs/en/security/images/management-builtin-users.jpg new file mode 100644 index 0000000000000..ec39d1f2b46dd Binary files /dev/null and b/x-pack/docs/en/security/images/management-builtin-users.jpg differ diff --git a/x-pack/docs/en/security/images/management-roles.jpg b/x-pack/docs/en/security/images/management-roles.jpg new file mode 100644 index 0000000000000..f8bb4af7d3f56 Binary files /dev/null and b/x-pack/docs/en/security/images/management-roles.jpg differ diff --git a/x-pack/docs/en/security/images/management-users.jpg b/x-pack/docs/en/security/images/management-users.jpg new file mode 100644 index 0000000000000..bea27be54a84c Binary files /dev/null and b/x-pack/docs/en/security/images/management-users.jpg differ diff --git a/x-pack/docs/en/security/index.asciidoc b/x-pack/docs/en/security/index.asciidoc new file mode 100644 index 0000000000000..320342dec13be --- /dev/null +++ b/x-pack/docs/en/security/index.asciidoc @@ -0,0 +1,109 @@ +[role="xpack"] +[[elasticsearch-security]] += Securing the {stack} + +[partintro] +-- +The {stack-security-features} enable you to easily secure a cluster. You can +password-protect your data as well as implement more advanced security +measures such as encrypting communications, role-based access control, +IP filtering, and auditing. This guide describes how to configure the security +features you need, and interact with your secured cluster. + +Security protects Elasticsearch clusters by: + +* <> + with password protection, role-based access control, and IP filtering. +* <> + with message authentication and SSL/TLS encryption. +* <> + so you know who's doing what to your cluster and the data it stores. 
+ +[float] +[[preventing-unauthorized-access]] +=== Preventing unauthorized access + +To prevent unauthorized access to your Elasticsearch cluster, you must have a +way to _authenticate_ users. This simply means that you need a way to validate +that a user is who they claim to be. For example, you have to make sure only +the person named _Kelsey Andorra_ can sign in as the user `kandorra`. The +{es-security-features} provide a standalone authentication mechanism that enables +you to quickly password-protect your cluster. If you're already using +<>, <>, or +<> to manage users in your organization, the {security-features} +are able to integrate with those systems to perform user authentication. + +In many cases, simply authenticating users isn't enough. You also need a way to +control what data users have access to and what tasks they can perform. The +{es-security-features} enable you to _authorize_ users by assigning access +_privileges_ to _roles_ and assigning those roles to users. For example, this +<> mechanism (a.k.a RBAC) enables +you to specify that the user `kandorra` can only perform read operations on the +`events` index and can't do anything at all with other indices. + +The {security-features} also support <>. +You can whitelist and blacklist specific IP addresses or subnets to control +network-level access to a server. + +[float] +[[preserving-data-integrity]] +=== Preserving data integrity + +A critical part of security is keeping confidential data confidential. +Elasticsearch has built-in protections against accidental data loss and +corruption. However, there's nothing to stop deliberate tampering or data +interception. The {stack-security-features} preserve the integrity of your +data by <> to and from nodes. For even +greater protection, you can increase the <> and +<>. + + +[float] +[[maintaining-audit-trail]] +=== Maintaining an audit trail + +Keeping a system secure takes vigilance. By using {stack-security-features} to +maintain an audit trail, you can easily see who is accessing your cluster and +what they're doing. By analyzing access patterns and failed attempts to access +your cluster, you can gain insights into attempted attacks and data breaches. +Keeping an auditable log of the activity in your cluster can also help diagnose +operational issues. + +[float] +=== Where to Go Next + +* <> + steps through how to install and start using Security for basic authentication. + +* <> + provides more information about how Security supports user authentication, + authorization, and encryption. + +* <> + shows you how to interact with an Elasticsearch cluster protected by the + {stack-security-features}. + +[float] +=== Have Comments, Questions, or Feedback? + +Head over to our {security-forum}[Security Discussion Forum] +to share your experience, questions, and suggestions. 
+-- + +include::how-security-works.asciidoc[] + +include::authentication/index.asciidoc[] + +include::authorization/index.asciidoc[] + +include::{xes-repo-dir}/security/auditing/index.asciidoc[] + +include::{xes-repo-dir}/security/securing-communications.asciidoc[] + +include::{xes-repo-dir}/security/using-ip-filtering.asciidoc[] + +include::{xes-repo-dir}/security/tribe-clients-integrations.asciidoc[] + +include::get-started-security.asciidoc[] + +include::securing-communications/tutorial-tls-intro.asciidoc[] diff --git a/x-pack/docs/en/security/limitations.asciidoc b/x-pack/docs/en/security/limitations.asciidoc new file mode 100644 index 0000000000000..0d075847ca89d --- /dev/null +++ b/x-pack/docs/en/security/limitations.asciidoc @@ -0,0 +1,92 @@ +[role="xpack"] +[[security-limitations]] +== Security limitations + +[float] +=== Plugins + +Elasticsearch's plugin infrastructure is extremely flexible in terms of what can +be extended. While it opens up Elasticsearch to a wide variety of (often custom) +additional functionality, when it comes to security, this high extensibility level +comes at a cost. We have no control over the third-party plugins' code (open +source or not) and therefore we cannot guarantee their compliance with +{stack-security-features}. For this reason, third-party plugins are not +officially supported on clusters with {security-features} enabled. + +[float] +=== Changes in index wildcard behavior + +Elasticsearch clusters with {security-features} enabled apply the `/_all` +wildcard, and all other wildcards, to the indices that the current user has +privileges for, not the set of all indices on the cluster. While creating or +retrieving aliases by providing wildcard expressions for alias names, if there +are no existing authorized aliases that match the wildcard expression provided +an IndexNotFoundException is returned. + +[float] +=== Multi document APIs + +Multi get and multi term vectors API throw IndexNotFoundException when trying to access non existing indices that the user is +not authorized for. By doing that they leak information regarding the fact that the index doesn't exist, while the user is not +authorized to know anything about those indices. + +[float] +=== Filtered index aliases + +Aliases containing filters are not a secure way to restrict access to individual +documents, due to the limitations described in +<>. +The {stack-security-features} provide a secure way to restrict access to +documents through the +<> feature. + +[float] +=== Field and document level security limitations + +When a user's role enables document or field level security for an index: + +* The user cannot perform write operations: +** The update API isn't supported. +** Update requests included in bulk requests aren't supported. +* The request cache is disabled for search requests. + +When a user's role enables document level security for an index: + +* Document level security isn't applied for APIs that aren't document based. + An example is the field stats API. +* Document level security doesn't affect global index statistics that relevancy + scoring uses. So this means that scores are computed without taking the role + query into account. Note that documents not matching with the role query are + never returned. +* The `has_child` and `has_parent` queries aren't supported as query in the + role definition. The `has_child` and `has_parent` queries can be used in the + search API with document level security enabled. 
+* Any query that makes remote calls to fetch data to query by isn't supported. + The following queries aren't supported: +** The `terms` query with terms lookup isn't supported. +** The `geo_shape` query with indexed shapes isn't supported. +** The `percolate` query isn't supported. +* If suggesters are specified and document level security is enabled then + the specified suggesters are ignored. +* A search request cannot be profiled if document level security is enabled. + +[float] +[[alias-limitations]] +=== Index and field names can be leaked when using aliases + +Calling certain Elasticsearch APIs on an alias can potentially leak information +about indices that the user isn't authorized to access. For example, when you get +the mappings for an alias with the `_mapping` API, the response includes the +index name and mappings for each index that the alias applies to. + +Until this limitation is addressed, avoid index and field names that contain +confidential or sensitive information. + +[float] +=== LDAP realm + +The <> does not currently support the discovery of nested +LDAP Groups. For example, if a user is a member of `group_1` and `group_1` is a +member of `group_2`, only `group_1` will be discovered. However, the +<> *does* support transitive +group membership. diff --git a/x-pack/docs/en/security/securing-communications/tutorial-tls-addnodes.asciidoc b/x-pack/docs/en/security/securing-communications/tutorial-tls-addnodes.asciidoc new file mode 100644 index 0000000000000..707a8501102bc --- /dev/null +++ b/x-pack/docs/en/security/securing-communications/tutorial-tls-addnodes.asciidoc @@ -0,0 +1,162 @@ +[role="xpack"] +[[encrypting-communications-hosts]] +=== Add nodes to your cluster + +Up to this point, we have used a cluster with a single {es} node to get up and +running with the {stack}. An {es} _node_ is a single server that is part of your +cluster and stores pieces of your data called _shards_. + +You can add more nodes to your cluster and optionally designate specific purposes +for each node. For example, you can allocate master nodes, data nodes, ingest +nodes, machine learning nodes, and dedicated coordinating nodes. For details +about each node type, see {ref}/modules-node.html[Nodes]. + +In a single cluster, you can have as many nodes as you want but they must be +able to communicate with each other. The communication between nodes in a +cluster is handled by the {ref}/modules-transport.html[transport module]. To +secure your cluster, you must ensure that the internode communications are +encrypted. + +NOTE: In this tutorial, we add more nodes by installing more copies of {es} on +the same machine. By default, {es} binds to loopback addresses for HTTP and +transport communication. That is fine for the purposes of this tutorial and for +downloading and experimenting with {es} in a test or development environment. +When you are deploying a production environment, however, you are generally +adding nodes on different machines so that your cluster is resilient to outages +and avoids data loss. In a production scenario, there are additional +requirements that are not covered in this tutorial. See +{ref}/bootstrap-checks.html#dev-vs-prod-mode[Development vs production mode] and +{ref}/add-elasticsearch-nodes.html[Adding nodes to your cluster]. + +Let's add two nodes to our cluster! + +. Install two additional copies of {es}. It's possible to run multiple {es} +nodes using a shared installation. 
In this tutorial, however, we're keeping +things simple by using the `zip` or `tar.gz` packages and by putting each copy +in a separate folder. You can simply repeat the steps that you used to install +{es} in the +{stack-gs}/get-started-elastic-stack.html#install-elasticsearch[Getting started with the {stack}] +tutorial. + +. Update the `ES_PATH_CONF/elasticsearch.yml` file on each node: ++ +-- +.. Enable the {es} {security-features}. +.. Ensure that the nodes share the same {ref}/cluster.name.html[`cluster.name`]. +.. Give each node a unique {ref}/node.name.html[`node.name`]. +.. Specify the minimum number of master-eligible nodes that must be available to +form a cluster. By default, each node is eligible to be elected as the +{ref}/modules-node.html#master-node[master node] and control the cluster. To +avoid a _split brain_ scenario where multiple nodes elect themselves as the +master, use the `discovery.zen.minimum_master_nodes` setting. + +By default, if you run multiple {es} nodes on the same machine, it +automatically uses free ports in the range 9200-9300 for HTTP and 9300-9400 for +transport. If you want to assign specific port numbers to each node, however, +you can add {ref}/modules-transport.html[TCP transport settings]. You can then +provide a list of these {ref}/modules-discovery-zen.html#discovery-seed-nodes[seed nodes], +which is used to discover the nodes in your cluster. + +For example, add the following settings to the `ES_PATH_CONF/elasticsearch.yml` +file on the first node: + +[source,yaml] +---- +xpack.security.enabled: true +cluster.name: test-cluster +node.name: node-1 +discovery.zen.minimum_master_nodes: 2 +transport.tcp.port: 9301 +discovery.zen.ping.unicast.hosts: ["localhost:9302", "localhost:9303"] +---- + +Add the following settings to the `ES_PATH_CONF/elasticsearch.yml` +file on the second node: + +[source,yaml] +---- +xpack.security.enabled: true +cluster.name: test-cluster +node.name: node-2 +discovery.zen.minimum_master_nodes: 2 +transport.tcp.port: 9302 +discovery.zen.ping.unicast.hosts: ["localhost:9301", "localhost:9303"] +---- + +Add the following settings to the `ES_PATH_CONF/elasticsearch.yml` +file on the third node: + +[source,yaml] +---- +xpack.security.enabled: true +cluster.name: test-cluster +node.name: node-3 +discovery.zen.minimum_master_nodes: 2 +transport.tcp.port: 9303 +discovery.zen.ping.unicast.hosts: ["localhost:9301", "localhost:9302"] +---- + +TIP: In these examples, we have not specified the `transport.host`, +`transport.bind_host`, or `transport.publish_host` settings, so they default to +the `network.host` value. If you have not specified the `network.host` setting, +it defaults to `_local_`, which represents the loopback addresses for the system. + +If you choose different cluster names, node names, host names, or ports, you +must substitute the appropriate values in subsequent steps as well. +-- + +. Start each {es} node. For example, if you installed {es} with a `.tar.gz` +package, run the following command from each {es} directory: ++ +-- +["source","sh",subs="attributes,callouts"] +---------------------------------------------------------------------- +./bin/elasticsearch +---------------------------------------------------------------------- + +See {ref}/starting-elasticsearch.html[Starting {es}]. + +-- + +. (Optional) Restart {kib}. 
For example, if you installed +{kib} with a `.tar.gz` package, run the following command from the {kib} +directory: ++ +-- +["source","sh",subs="attributes,callouts"] +---------------------------------------------------------------------- +./bin/kibana +---------------------------------------------------------------------- + +See {kibana-ref}/start-stop.html[Starting and stopping {kib}]. +-- + +. Verify that your cluster now contains three nodes. For example, use the +{ref}/cluster-health.html[cluster health API]: ++ +-- +[source,js] +---------------------------------- +GET _cluster/health +---------------------------------- +// CONSOLE + +Confirm the `number_of_nodes` in the response from this API. + +You can also use the {ref}/cat-nodes.html[cat nodes API] to identify the master +node: + +[source,js] +---------------------------------- +GET _cat/nodes?v +---------------------------------- +// CONSOLE + +The node that has an asterisk(*) in the `master` column is the elected master +node. +-- + +Now that you have multiple nodes, your data can be distributed across the +cluster in multiple primary and replica shards. For more information about the +concepts of clusters, nodes, and shards, see +{ref}/getting-started.html[Getting started with {es}]. diff --git a/x-pack/docs/en/security/securing-communications/tutorial-tls-certificates.asciidoc b/x-pack/docs/en/security/securing-communications/tutorial-tls-certificates.asciidoc new file mode 100644 index 0000000000000..b9cbc4482350a --- /dev/null +++ b/x-pack/docs/en/security/securing-communications/tutorial-tls-certificates.asciidoc @@ -0,0 +1,128 @@ +[role="xpack"] +[[encrypting-communications-certificates]] +=== Generate certificates + +In a secured cluster, {es} nodes use certificates to identify themselves when +communicating with other nodes. + +The cluster must validate the authenticity of these certificates. The +recommended approach is to trust a specific certificate authority (CA). Thus +when nodes are added to your cluster they just need to use a certificate signed +by the same CA. + +. Use the `elasticsearch-certutil` command to generate a CA and certificates and +private keys for each node in your cluster. ++ +-- +You can let the tool prompt you for information about each node in your cluster, +or you can supply that information in an input file. For example, create a +`test-cluster.yml` file in one of your {es} nodes: + +[source,yaml] +---- +instances: + - name: "node-1" <1> + dns: + - "localhost" + ip: + - "127.0.0.1" + - "::1" + - name: "node-2" + dns: + - "localhost" + ip: + - "127.0.0.1" + - "::1" + - name: "node-3" + dns: + - "localhost" + ip: + - "127.0.0.1" + - "::1" +---- +<1> If these `name` values match the values you specified for `node.name` in +each `elasticsearch.yml` file, you can use a shortcut in a subsequent step. + +TIP: In this tutorial, all three nodes exist on the same machine and share the +same IP address and hostname. In general, clusters are more resilient when they +contain nodes from multiple servers and this list would reflect that diversity. + +For information about all of the possible fields in this file, see +{ref}/certutil.html#certutil-silent[Using elasticsearch-certutil in silent mode]. 
+ +Then run the following command: + +["source","sh",subs="attributes,callouts"] +---------------------------------------------------------------------- +./bin/elasticsearch-certutil cert --in test-cluster.yml --keep-ca-key +---------------------------------------------------------------------- + +It prompts you for passwords to secure each output file. + +TIP: Ideally, you should use a different password for each file and store the +files securely--especially the CA, since it holds the key to your cluster. + +-- + +. Decompress the `certificate-bundle.zip` file. For example: ++ +-- +["source","sh",subs="attributes,callouts"] +---------------------------------------------------------------------- +unzip certificate-bundle.zip + +Archive: certificate-bundle.zip + creating: ca/ + inflating: ca/ca.p12 + creating: node-1/ + inflating: node-1/node-1.p12 + creating: node-2/ + inflating: node-2/node-2.p12 + creating: node-3/ + inflating: node-3/node-3.p12 +---------------------------------------------------------------------- + +The `certificate-bundle.zip` file contains a folder for each of your nodes and a +`ca` folder. + +The `ca` folder contains a `ca.p12` file, which is a PKCS#12 keystore. This file +contains the public certificate for your certificate authority and the private +key that is used to sign the node certificates. + +Each node folder contains a single PKCS#12 keystore that includes a node +certificate, node key, and CA certificate. +-- + +. Create a folder to contain certificates in the configuration +directory on each {es} node. For example, create a `certs` folder in the `config` +directory on each node. + +. Copy the appropriate certificate to the configuration directory on each {es} +node. For example, copy the `node-1.p12` file into the `config/certs` directory +on the first node. Copy the `node-2.p12` file to the second node and the +`node-3.p12` to the third. + +If you later add more nodes, they just need to use a certificate signed by the +same CA. For this reason, make sure you store your CA in a safe place and don't +forget its password! + +For example: +["source","sh",subs="attributes,callouts"] +---------------------------------------------------------------------- +./bin/elasticsearch-certutil cert --ca ca/ca.p12 \ <1> +--name \ <2> +--dns \ <3> +--ip <4> +---------------------------------------------------------------------- +<1> The certificate authority that you generated for this cluster. +<2> The name of the generated certificate. Ideally this value matches the new +node's `node.name` value in its `elasticsearch.yml` file. +<3> A comma-separated list of DNS names for the new node. +<4> A comma-separated list of IP addresses for the new node. + +TIP: The {ref}/certutil.html[elasticsearch-certutil] command has a lot more +options. For example, it can generate Privacy Enhanced Mail (PEM) formatted +certificates and keys. It can also generate certificate signing requests (CSRs) +that you can use to obtain signed certificates from a commercial or +organization-specific certificate authority. However, those options are not +covered in this tutorial. 
\ No newline at end of file diff --git a/x-pack/docs/en/security/securing-communications/tutorial-tls-internode.asciidoc b/x-pack/docs/en/security/securing-communications/tutorial-tls-internode.asciidoc new file mode 100644 index 0000000000000..3e91bf834b99c --- /dev/null +++ b/x-pack/docs/en/security/securing-communications/tutorial-tls-internode.asciidoc @@ -0,0 +1,109 @@ +[role="xpack"] +[[encrypting-internode]] +=== Encrypt internode communications + +Now that you've generated a certificate authority and certificates for each node, +you must update your cluster to use these files. + +. Stop each {es} node. For example, if you installed {es} from an archive +distribution, enter `Ctrl-C` on the command line. See +{ref}/stopping-elasticsearch.html[Stopping {es}]. + +. On each node, enable Transport Layer Security (TLS/SSL) for transport +(internode) communications. You must also configure each node to identify itself +using its signed certificate. ++ +-- +For example, add the following settings in each `ES_PATH_CONF/elasticsearch.yml` +file: + +[source,yaml] +---- +xpack.security.transport.ssl.enabled: true +xpack.security.transport.ssl.keystore.path: certs/${node.name}.p12 <1> +xpack.security.transport.ssl.truststore.path: certs/${node.name}.p12 +---- +<1> If the file name for your certificate does not match the `node.name` value, +you must put the appropriate file name in each `elasticsearch.yml` file. + +NOTE: The PKCS#12 keystore that is output by the `elasticsearch-certutil` can be +used as both a keystore and a truststore. If you use other tools to manage and +generate your certificates, you might have different values for these settings, +but that scenario is not covered in this tutorial. + +For more information about these settings, see +{ref}/security-settings.html#transport-tls-ssl-settings[Transport TLS settings]. +-- + +. On each node, store the password for PKCS#12 file in the {es} keystore. ++ +-- +For example, run the following commands on each node: + +["source","sh",subs="attributes,callouts"] +---------------------------------------------------------------------- +./bin/elasticsearch-keystore create <1> +./bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password +./bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password +---------------------------------------------------------------------- +<1> If the {es} keystore already exists, this command asks whether you want to +overwrite it. You do not need to overwrite it; you can simply add settings to +your existing {es} keystore. + +You are prompted to supply the password value. As you saw in the previous step, +we are using the same file for both the transport TLS keystore and truststore, +therefore you supply the same password for both of these settings. +-- + +. Start each {es} node. For example, if you installed {es} with a `.tar.gz` +package, run the following command from each {es} directory: ++ +-- +["source","sh",subs="attributes,callouts"] +---------------------------------------------------------------------- +./bin/elasticsearch +---------------------------------------------------------------------- + +See {ref}/starting-elasticsearch.html[Starting {es}]. +-- + +. (Optional) Restart {kib}. 
For example, if you installed +{kib} with a `.tar.gz` package, run the following command from the {kib} +directory: ++ +-- +["source","sh",subs="attributes,callouts"] +---------------------------------------------------------------------- +./bin/kibana +---------------------------------------------------------------------- + +See {kibana-ref}/start-stop.html[Starting and stopping {kib}]. +-- + +. Verify that your cluster is healthy. For example, use the +{ref}/cluster-health.html[cluster health API]: ++ +-- +[source,js] +---------------------------------- +GET _cluster/health +---------------------------------- +// CONSOLE + +Confirm the `status` of your cluster is `green` in the response from this API. + +If you encounter errors, you can see some common problems and solutions in +<>. +-- + +[float] +[[encrypting-internode-nextsteps]] +=== What's next? + +Congratulations! You've encrypted communications between the nodes in your +cluster and can pass the +{ref}/bootstrap-checks-xpack.html#bootstrap-checks-tls[TLS bootstrap check]. + +If you want to encrypt communications between other products in the {stack}, see +<>. + diff --git a/x-pack/docs/en/security/securing-communications/tutorial-tls-intro.asciidoc b/x-pack/docs/en/security/securing-communications/tutorial-tls-intro.asciidoc new file mode 100644 index 0000000000000..9975dbbd1bd70 --- /dev/null +++ b/x-pack/docs/en/security/securing-communications/tutorial-tls-intro.asciidoc @@ -0,0 +1,56 @@ +[role="xpack"] +[[encrypting-internode-communications]] +== Tutorial: Encrypting communications + +When you enable {es} {security-features}, unless you have a trial license, you +must use Transport Layer Security (TLS) to encrypt internode communication. In +this tutorial, you learn how to meet the minimum requirements to pass the +{ref}/bootstrap-checks-xpack.html#bootstrap-checks-tls[TLS bootstrap check]. + +NOTE: Single-node clusters that use a loopback interface do not have this +requirement. + +[float] +[[encrypting-internode-prerequisites]] +=== Before you begin + +Ideally, you should do this tutorial only after you complete the +{stack-gs}/get-started-elastic-stack.html[Getting started with the {stack}] and +<> tutorials. At a +minimum, you must: + +. Install and configure {es} and {kib} in a cluster with a single {es} node, as +described in +{stack-gs}/get-started-elastic-stack.html[Getting started with the {stack}]. In +particular, this tutorial provides instructions that work with the `zip` and +`tar.gz` packages. + +. Verify that you are using a license that includes the encrypted communications +{security-features}. To view your license in {kib}, go to *Management* and click +*License Management*. ++ +-- +By default, when you install {stack} products, they apply basic licenses with no +expiration dates. To complete this tutorial, you must have a basic or trial +license at a minimum. For more information, see {subscriptions} and +<>. +-- + +. <>. + +. <>. + +. <>. + +. Stop {kib}. The method for starting and stopping {kib} varies depending on +how you installed it. For example, if you installed {kib} from an archive +distribution (`.tar.gz` or `.zip`), stop it by entering `Ctrl-C` on the command +line. See {kibana-ref}/start-stop.html[Starting and stopping {kib}]. + +. Stop {es}. For example, if you installed {es} from an archive distribution, +enter `Ctrl-C` on the command line. See +{ref}/stopping-elasticsearch.html[Stopping {es}]. 
+ +include::tutorial-tls-addnodes.asciidoc[] +include::tutorial-tls-certificates.asciidoc[] +include::tutorial-tls-internode.asciidoc[] \ No newline at end of file diff --git a/x-pack/docs/en/security/tribe-clients-integrations/cross-cluster-kibana.asciidoc b/x-pack/docs/en/security/tribe-clients-integrations/cross-cluster-kibana.asciidoc new file mode 100644 index 0000000000000..95e5d188f0084 --- /dev/null +++ b/x-pack/docs/en/security/tribe-clients-integrations/cross-cluster-kibana.asciidoc @@ -0,0 +1,39 @@ +[[cross-cluster-kibana]] +==== {ccs-cap} and {kib} + +When {kib} is used to search across multiple clusters, a two-step authorization +process determines whether or not the user can access indices on a remote +cluster: + +* First, the local cluster determines if the user is authorized to access remote +clusters. (The local cluster is the cluster {kib} is connected to.) +* If they are, the remote cluster then determines if the user has access +to the specified indices. + +To grant {kib} users access to remote clusters, assign them a local role +with read privileges to indices on the remote clusters. You specify remote +cluster indices as `:`. + +To enable users to actually read the remote indices, you must create a matching +role on the remote clusters that grants the `read_cross_cluster` privilege +and access to the appropriate indices. + +For example, if {kib} is connected to the cluster where you're actively +indexing {ls} data (your _local cluster_) and you're periodically +offloading older time-based indices to an archive cluster +(your _remote cluster_) and you want to enable {kib} users to search both +clusters: + +. On the local cluster, create a `logstash_reader` role that grants +`read` and `view_index_metadata` privileges on the local `logstash-*` indices. ++ +NOTE: If you configure the local cluster as another remote in {es}, the +`logstash_reader` role on your local cluster also needs to grant the +`read_cross_cluster` privilege. + +. Assign your {kib} users the `kibana_user` role and your `logstash_reader` +role. + +. On the remote cluster, create a `logstash_reader` role that grants the +`read_cross_cluster` privilege and `read` and `view_index_metadata` privileges +for the `logstash-*` indices. diff --git a/x-pack/docs/en/security/tribe-clients-integrations/cross-cluster.asciidoc b/x-pack/docs/en/security/tribe-clients-integrations/cross-cluster.asciidoc index e2ace6c121277..78edd5f030275 100644 --- a/x-pack/docs/en/security/tribe-clients-integrations/cross-cluster.asciidoc +++ b/x-pack/docs/en/security/tribe-clients-integrations/cross-cluster.asciidoc @@ -156,4 +156,4 @@ GET two:logs-2017.04/_search <1> // TEST[skip:todo] //TBD: Is there a missing description of the <1> callout above? -include::{kib-repo-dir}/user/security/cross-cluster-kibana.asciidoc[] +include::cross-cluster-kibana.asciidoc[] diff --git a/x-pack/docs/en/security/troubleshooting.asciidoc b/x-pack/docs/en/security/troubleshooting.asciidoc new file mode 100644 index 0000000000000..6acde6db37bfc --- /dev/null +++ b/x-pack/docs/en/security/troubleshooting.asciidoc @@ -0,0 +1,810 @@ +[role="xpack"] +[[security-troubleshooting]] +== Troubleshooting security +++++ +Security +++++ + +Use the information in this section to troubleshoot common problems and find +answers for frequently asked questions. + +* <> +* <> +* <> +* <> +* <> +* <> +* <> +* <> +* <> +* <> +* <> +* <> +* <> + + +To get help, see <>. 
+ +[[security-auth-failure-upgrade]] +=== Can't log in after upgrading to {version} + +*Symptoms:* + +* Native realm credentials don't work after $version upgrade or snapshot restore. +* Error message: "Security index is not on the current version - the native +realm will not be operational until the upgrade API is run on the security index" + +*Resolution:* + +You must upgrade the `.security` index to the 6.x format. For +instructions, see +{stack-ref}/upgrading-elastic-stack.html#upgrade-internal-indices[Upgrading +internal indices]. + +There are two situations where it's necessary to manually upgrade the +`.security` index: + +* After upgrading directly to {version} from 5.5 or earlier. +* After restoring a snapshot from 5.5 or earlier that contains a `.security` +index in the old format to a 6.0 cluster. + +[[security-trb-settings]] +=== Some settings are not returned via the nodes settings API + +*Symptoms:* + +* When you use the {ref}/cluster-nodes-info.html[nodes info API] to retrieve +settings for a node, some information is missing. + +*Resolution:* + +This is intentional. Some of the settings are considered to be highly +sensitive: all `ssl` settings, ldap `bind_dn`, and `bind_password`. +For this reason, we filter these settings and do not expose them via +the nodes info API rest endpoint. You can also define additional +sensitive settings that should be hidden using the +`xpack.security.hide_settings` setting. For example, this snippet +hides the `url` settings of the `ldap1` realm and all settings of the +`ad1` realm. + +[source, yaml] +------------------------------------------ +xpack.security.hide_settings: xpack.security.authc.realms.ldap1.url, +xpack.security.authc.realms.ad1.* +------------------------------------------ + +[[security-trb-roles]] +=== Authorization exceptions + +*Symptoms:* + +* I configured the appropriate roles and the users, but I still get an +authorization exception. +* I can authenticate to LDAP, but I still get an authorization exception. + + +*Resolution:* + +. Verify that the role names associated with the users match the roles defined +in the `roles.yml` file. You can use the `elasticsearch-users` tool to list all +the users. Any unknown roles are marked with `*`. ++ +-- +[source, shell] +------------------------------------------ +bin/elasticsearch-users list +rdeniro : admin +alpacino : power_user +jacknich : monitoring,unknown_role* <1> +------------------------------------------ +<1> `unknown_role` was not found in `roles.yml` + +For more information about this command, see the +{ref}/users-command.html[`elasticsearch-users` command]. +-- + +. If you are authenticating to LDAP, a number of configuration options can cause +this error. ++ +-- +|====================== +|_group identification_ | + +Groups are located by either an LDAP search or by the "memberOf" attribute on +the user. Also, If subtree search is turned off, it will search only one +level deep. See the <> for all the options. +There are many options here and sticking to the defaults will not work for all +scenarios. + +| _group to role mapping_| + +Either the `role_mapping.yml` file or the location for this file could be +misconfigured. For more information, see {ref}/security-files.html[Security files]. + +|_role definition_| + +The role definition might be missing or invalid. 
+
+|======================
+
+To help track down these possibilities, add the following lines to the end of
+the `log4j2.properties` configuration file in `ES_PATH_CONF`:
+
+[source,properties]
+----------------
+logger.authc.name = org.elasticsearch.xpack.security.authc
+logger.authc.level = DEBUG
+----------------
+
+A successful authentication should produce debug statements that list groups and
+role mappings.
+--
+
+[[security-trb-extraargs]]
+=== Users command fails due to extra arguments
+
+*Symptoms:*
+
+* The `elasticsearch-users` command fails with the following message:
+`ERROR: extra arguments [...] were provided`.
+
+*Resolution:*
+
+This error occurs when the `elasticsearch-users` tool parses the input and
+finds unexpected arguments. This can happen when special characters are used in
+some of the arguments. For example, on Windows systems the `,` character is
+considered a parameter separator; in other words, `-r role1,role2` is
+translated to `-r role1 role2` and the `elasticsearch-users` tool only
+recognizes `role1` as an expected parameter. The solution is to quote the
+parameter: `-r "role1,role2"`.
+
+For more information about this command, see
+{ref}/users-command.html[`elasticsearch-users` command].
+
+[[trouble-shoot-active-directory]]
+=== Users are frequently locked out of Active Directory
+
+*Symptoms:*
+
+* Certain users are frequently locked out of Active Directory.
+
+*Resolution:*
+
+Check your realm configuration; realms are checked serially, one after another.
+If your Active Directory realm is checked before other realms and there are
+usernames that appear in both Active Directory and another realm, a valid login
+for one realm might be causing failed login attempts in another realm.
+
+For example, if `UserA` exists in both Active Directory and a file realm, and
+the Active Directory realm is checked first and the file realm second, an
+attempt to authenticate as `UserA` in the file realm would first attempt to
+authenticate against Active Directory and fail, before successfully
+authenticating against the `file` realm. Because authentication is verified on
+each request, the Active Directory realm would be checked - and fail - on each
+request for `UserA` in the `file` realm. In this case, while the authentication
+request completed successfully, the account in Active Directory would have
+received several failed login attempts, and that account might become
+temporarily locked out. Plan the order of your realms accordingly.
+
+Also note that it is not typically necessary to define multiple Active Directory
+realms to handle domain controller failures. When using Microsoft DNS, the DNS
+entry for the domain should always point to an available domain controller.
+
+
+[[trb-security-maccurl]]
+=== Certificate verification fails for curl on Mac
+
+*Symptoms:*
+
+* `curl` on the Mac returns a certificate verification error even when the
+`--cacert` option is used.
+
+
+*Resolution:*
+
+Apple's integration of `curl` with their keychain technology disables the
+`--cacert` option.
+See http://curl.haxx.se/mail/archive-2013-10/0036.html for more information.
+
+You can use another tool, such as `wget`, to test certificates. Alternatively,
+you can add the signing certificate authority's certificate to the MacOS system
+keychain, using a procedure similar to the one detailed in the
+http://support.apple.com/kb/PH14003[Apple knowledge base]. Be sure to add the
+signing CA's certificate and not the server's certificate.
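+
+For example, a quick verification with `wget` might look like the following.
+This is a sketch that assumes your CA certificate is available as a local
+`ca.pem` file and that {es} is listening on `https://localhost:9200`:
+
+[source,sh]
+----------------------------------------------------------------------
+wget --ca-certificate=ca.pem -O - https://localhost:9200/
+----------------------------------------------------------------------
+
+If the certificate chain is trusted, `wget` proceeds with the request (an
+authentication error such as a `401` response is fine for this purpose); if it
+is not, `wget` reports a certificate verification error that is independent of
+the Mac keychain behavior described above.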
+
+
+[[trb-security-sslhandshake]]
+=== SSLHandshakeException causes connections to fail
+
+*Symptoms:*
+
+* An `SSLHandshakeException` causes a connection to a node to fail and indicates
+that there is a configuration issue. Some of the common exceptions are shown
+below with tips on how to resolve these issues.
+
+
+*Resolution:*
+
+`java.security.cert.CertificateException: No name matching node01.example.com found`::
++
+--
+Indicates that a client connection was made to `node01.example.com` but the
+certificate returned did not contain the name `node01.example.com`. In most
+cases, the issue can be resolved by ensuring the name is specified during
+certificate creation. For more information, see <>. Another scenario is an
+environment that does not use DNS names in certificates at all. In that case,
+all settings in `elasticsearch.yml` should use only IP addresses, including the
+`network.publish_host` setting.
+--
+
+`java.security.cert.CertificateException: No subject alternative names present`::
++
+--
+Indicates that a client connection was made to an IP address but the returned
+certificate did not contain any `SubjectAlternativeName` entries. IP addresses
+are only used for hostname verification if they are specified as a
+`SubjectAlternativeName` during certificate creation. If the intent was to use
+IP addresses for hostname verification, then the certificate needs to be
+regenerated with the appropriate IP address. See <>.
+--
+
+`javax.net.ssl.SSLHandshakeException: null cert chain` and `javax.net.ssl.SSLException: Received fatal alert: bad_certificate`::
++
+--
+The `SSLHandshakeException` indicates that a self-signed certificate was
+returned by the client and is not trusted because it cannot be found in the
+`truststore` or `keystore`. The `SSLException` is seen on the client side of
+the connection.
+--
+
+`sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target` and `javax.net.ssl.SSLException: Received fatal alert: certificate_unknown`::
++
+--
+The `SunCertPathBuilderException` indicates that a certificate was returned
+during the handshake that is not trusted. This message is seen on the client
+side of the connection. The `SSLException` is seen on the server side of the
+connection. The CA certificate that signed the returned certificate was not
+found in the `keystore` or `truststore` and needs to be added to trust this
+certificate.
+--
+
+[[trb-security-ssl]]
+=== Common SSL/TLS exceptions
+
+*Symptoms:*
+
+* You might see some exceptions related to SSL/TLS in your logs. Some of the
+common exceptions are shown below with tips on how to resolve these issues.
+
+*Resolution:*
+
+`WARN: received plaintext http traffic on a https channel, closing connection`::
++
+--
+Indicates that there was an incoming plaintext HTTP request. This typically
+occurs when an external application attempts to make an unencrypted call to the
+REST interface. Please ensure that all applications are using `https` when
+calling the REST interface with SSL enabled.
+--
+
+`org.elasticsearch.common.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record:`::
++
+--
+Indicates that there was incoming plaintext traffic on an SSL connection. This
+typically occurs when a node is not configured to use encrypted communication
+and tries to connect to nodes that are using encrypted communication. Please
+verify that all nodes are using the same setting for
+`xpack.security.transport.ssl.enabled`.
+
+For more information about this setting, see
+{ref}/security-settings.html[Security Settings in {es}].
+--
+
+`java.io.StreamCorruptedException: invalid internal transport message format, got`::
++
+--
+Indicates an issue with data received on the transport interface in an unknown
+format. This can happen when a node with encrypted communication enabled
+connects to a node that has encrypted communication disabled. Please verify that
+all nodes are using the same setting for `xpack.security.transport.ssl.enabled`.
+
+For more information about this setting, see
+{ref}/security-settings.html[Security Settings in {es}].
+--
+
+`java.lang.IllegalArgumentException: empty text`::
++
+--
+This exception is typically seen when an `https` request is made to a node that
+is not using `https`. If `https` is desired, please ensure the following setting
+is in `elasticsearch.yml`:
+
+[source,yaml]
+----------------
+xpack.security.http.ssl.enabled: true
+----------------
+
+For more information about this setting, see
+{ref}/security-settings.html[Security Settings in {es}].
+--
+
+`ERROR: unsupported ciphers [...] were requested but cannot be used in this JVM`::
++
+--
+This error occurs when an SSL/TLS cipher suite is specified that cannot be
+supported by the JVM that {es} is running in. The {security-features} try to use
+the specified cipher suites that are supported by this JVM. This error can occur
+when using the security defaults, as some distributions of OpenJDK do not enable
+the PKCS11 provider by default. In this case, we recommend consulting your JVM
+documentation for details on how to enable the PKCS11 provider.
+
+Another common source of this error is requesting cipher suites that use
+encryption with a key length greater than 128 bits when running on an Oracle JDK.
+In this case, you must install the
+<>.
+--
+
+[[trb-security-kerberos]]
+=== Common Kerberos exceptions
+
+*Symptoms:*
+
+* User authentication fails due to either a GSS negotiation failure
+or a service login failure (either on the server or in the {es} HTTP client).
+Some of the common exceptions are listed below with some tips to help resolve
+them.
+
+*Resolution:*
+
+`Failure unspecified at GSS-API level (Mechanism level: Checksum failed)`::
++
+--
+
+When you see this error message on the HTTP client side, it may be
+related to an incorrect password.
+
+When you see this error message in the {es} server logs, it may be
+related to the {es} service keytab. The keytab file is present but it failed
+to log in as the user. Please check the keytab expiry. Also check whether the
+keytab contains up-to-date credentials; if not, replace them.
+
+You can use tools like `klist` or `ktab` to list the principals inside
+the keytab and validate them. You can use `kinit` to see if you can acquire
+initial tickets using the keytab. Please check the tools and their documentation
+in your Kerberos environment.
+
+Kerberos depends on proper hostname resolution, so please check your DNS
+infrastructure. An incorrect DNS setup, incorrect DNS SRV records, or a wrong
+configuration for the KDC servers in `krb5.conf` can cause problems with
+hostname resolution.
+
+--
+
+`Failure unspecified at GSS-API level (Mechanism level: Request is a replay (34))`::
+
+`Failure unspecified at GSS-API level (Mechanism level: Clock skew too great (37))`::
++
+--
+
+To prevent replay attacks, Kerberos V5 sets a maximum tolerance for computer
+clock synchronization, which is typically 5 minutes. Please check whether
+the time on the machines within the domain is in sync.
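+
+For example, on Linux hosts you can get a quick view of the clock state with
+standard operating system tools. Which commands are available depends on the
+time service your hosts run; the following are common options rather than
+{es}-specific requirements:
+
+[source,sh]
+----------------------------------------------------------------------
+timedatectl status    # reports whether the system clock is synchronized (systemd hosts)
+chronyc tracking      # shows the current offset when chrony is the NTP client
+----------------------------------------------------------------------
+
+On Windows domain members, `w32tm /query /status` provides similar information.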
+
+--
+
+Kerberos logs are often cryptic, and many things can go wrong because Kerberos
+depends on external services such as DNS and NTP. You might have to enable
+additional debug logs to determine the root cause of the issue.
+
+{es} uses a JAAS (Java Authentication and Authorization Service) Kerberos login
+module to provide Kerberos support. To enable debug logs on {es} for the login
+module, use the following Kerberos realm setting:
+
+[source,yaml]
+----------------
+xpack.security.authc.realms.<realm-name>.krb.debug: true
+----------------
+
+For detailed information, see {ref}/security-settings.html#ref-kerberos-settings[Kerberos realm settings].
+
+Sometimes you may need to go deeper to understand the problem during SPNEGO
+GSS context negotiation or to look at the Kerberos message exchange. To enable
+Kerberos/SPNEGO debug logging on the JVM, add the following JVM system properties:
+
+`-Dsun.security.krb5.debug=true`
+
+`-Dsun.security.spnego.debug=true`
+
+For more information about JVM system properties, see {ref}/jvm-options.html[configuring JVM options].
+
+[[trb-security-saml]]
+=== Common SAML issues
+
+Some of the common SAML problems are shown below with tips on how to resolve
+these issues.
+
+. *Symptoms:*
++
+--
+Authentication in {kib} fails and the following error is printed in the {es}
+logs:
+
+....
+Cannot find any matching realm for [SamlPrepareAuthenticationRequest{realmName=null,
+assertionConsumerServiceURL=https://my.kibana.url/api/security/v1/saml}]
+....
+
+*Resolution:*
+
+{es}, {kib}, and your Identity Provider must all have the same view of what the
+Assertion Consumer Service URL of the SAML Service Provider is.
+
+.. {es} discovers this via the `sp.acs` setting in your {es} SAML realm
+configuration.
+.. {kib} constructs this value using the `server.host` and `server.port` settings
+in `kibana.yml`. For instance:
++
+[source,yaml]
+-----------------------------------------------
+server.host: kibanaserver.org
+server.port: 3456
+-----------------------------------------------
++
+These settings would mean that {kib} would construct the Assertion Consumer
+Service URL as `https://kibanaserver.org:3456/api/security/v1/saml`. However,
+if, for example, {kib} is behind a reverse proxy and you have configured the
+following `xpack.security.public.*` settings:
++
+[source,yaml]
+-----------------------------------------------
+xpack.security.public:
+  protocol: https
+  hostname: kibana.proxy.com
+  port: 8080
+-----------------------------------------------
++
+These settings would instruct {kib} to construct the Assertion Consumer Service
+URL as `https://kibana.proxy.com:8080/api/security/v1/saml`.
+
+.. The SAML Identity Provider is either explicitly configured by the IdP
+administrator or consumes the SAML metadata that is generated by {es}, and as
+such it uses the same value for the Assertion Consumer Service URL as the one
+that is configured in the `sp.acs` setting in the {es} SAML realm
+configuration.
+--
++
+The error encountered here indicates that the Assertion Consumer Service URL
+that {kib} has constructed via one of the aforementioned ways
+(`https://my.kibana.url/api/security/v1/saml`) is not the one that {es} is
+configured with. Note that these two URLs are compared as case-sensitive strings
+and not as canonicalized URLs.
++
+Often, this can be resolved by changing the `sp.acs` URL in `elasticsearch.yml`
+to match the value that {kib} has constructed. Note, however, that the SAML IdP
+configuration also needs to be adjusted to reflect this change.
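++
+For illustration only, a 6.x-style SAML realm configuration with the corrected
+`sp.acs` value might look like the following. The realm name, metadata path,
+entity IDs, and attribute mapping are assumptions, not values taken from this
+guide:
++
+[source,yaml]
+-----------------------------------------------
+xpack.security.authc.realms.saml1:
+  type: saml
+  order: 2
+  idp.metadata.path: saml/idp-metadata.xml
+  idp.entity_id: "https://sso.example.com/"
+  sp.entity_id: "https://my.kibana.url"
+  sp.acs: "https://my.kibana.url/api/security/v1/saml"
+  attributes.principal: "urn:oid:0.9.2342.19200300.100.1.1"
+-----------------------------------------------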
+
++
+Alternatively, if you think {kib} is using the wrong value for the Assertion
+Consumer Service URL, you need to change the configuration in `kibana.yml`,
+either by adjusting `server.host` and `server.port` to change the URL that {kib}
+listens on, or by adjusting the `xpack.security.public.*` settings to make {kib}
+aware of its correct public URL.
+
+. *Symptoms:*
++
+--
+Authentication in {kib} fails and the following error is printed in the
+{es} logs:
+
+....
+Authentication to realm saml1 failed - Provided SAML response is not valid for realm
+saml/saml1 (Caused by ElasticsearchSecurityException[Conditions [https://some-url-here...]
+do not match required audience [https://my.kibana.url]])
+....
+
+*Resolution:*
+
+We received a SAML response that is addressed to another SAML Service Provider.
+This usually means that the configured SAML Service Provider Entity ID in
+`elasticsearch.yml` (`sp.entity_id`) does not match what has been configured as
+the SAML Service Provider Entity ID in the SAML Identity Provider documentation.
+
+To resolve this issue, ensure that both the SAML realm in {es} and the IdP are
+configured with the same string for the SAML Entity ID of the Service Provider.
+
+TIP: These strings are compared as case-sensitive strings and not as
+canonicalized URLs even when the values are URL-like. Be mindful of trailing
+slashes, port numbers, etc.
+
+--
+
+. *Symptoms:*
++
+--
+Authentication in {kib} fails and the following error is printed in the
+{es} logs:
+
+....
+Cannot find metadata for entity [your:entity.id] in [metadata.xml]
+....
+
+*Resolution:*
+
+We could not find the metadata for the SAML Entity ID `your:entity.id` in the
+configured metadata file (`metadata.xml`).
+
+.. Ensure that the `metadata.xml` file you are using is indeed the one provided
+by your SAML Identity Provider.
+.. Ensure that the `metadata.xml` file contains one `<EntityDescriptor>` element
+whose `entityID` attribute has the same value as the `idp.entity_id` setting in
+your SAML realm configuration in `elasticsearch.yml`.
+--
+
+. *Symptoms:*
++
+--
+Authentication in {kib} fails and the following error is printed in the
+{es} logs:
+
+....
+unable to authenticate user [<unauthenticated-saml-user>]
+for action [cluster:admin/xpack/security/saml/authenticate]
+....
+
+*Resolution:*
+
+This error indicates that {es} failed to process the incoming SAML
+authentication message. Since the message can't be processed, {es} is not aware
+of who the to-be authenticated user is and the `<unauthenticated-saml-user>`
+placeholder is used instead. To diagnose the _actual_ problem, you must check
+the {es} logs for further details.
+--
+
+. *Symptoms:*
++
+--
+Authentication in {kib} fails and the following error is printed in the
+{es} logs:
+
+....
+Authentication to realm my-saml-realm failed -
+Provided SAML response is not valid for realm saml/my-saml-realm
+(Caused by ElasticsearchSecurityException[SAML Response is not a 'success' response:
+ The SAML IdP did not grant the request. It indicated that the Elastic Stack side sent
+ something invalid (urn:oasis:names:tc:SAML:2.0:status:Requester). Specific status code which might
+ indicate what the issue is: [urn:oasis:names:tc:SAML:2.0:status:InvalidNameIDPolicy]]
+)
+....
+
+*Resolution:*
+
+This means that the SAML Identity Provider failed to authenticate the user and
+sent a SAML Response to the Service Provider ({stack}) indicating this failure.
+The message conveys whether the SAML Identity Provider thinks that the problem
+is with the Service Provider ({stack}) or with the Identity Provider itself.
+The specific status code that follows is extremely useful because it usually
+indicates the underlying issue.
The list of specific error codes is defined in the +https://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf[SAML 2.0 Core specification - Section 3.2.2.2] +and the most commonly encountered ones are: + +. `urn:oasis:names:tc:SAML:2.0:status:AuthnFailed`: The SAML Identity Provider failed to + authenticate the user. There is not much to troubleshoot on the {stack} side for this status, the logs of + the SAML Identity Provider will hopefully offer much more information. +. `urn:oasis:names:tc:SAML:2.0:status:InvalidNameIDPolicy`: The SAML Identity Provider cannot support + releasing a NameID with the requested format. When creating SAML Authentication Requests, {es} sets + the NameIDPolicy element of the Authentication request with the appropriate value. This is controlled + by the {ref}/security-settings.html#ref-saml-settings[`nameid_format`] configuration parameter in + `elasticsearch.yml`, which if not set defaults to `urn:oasis:names:tc:SAML:2.0:nameid-format:transient`. + This instructs the Identity Provider to return a NameID with that specific format in the SAML Response. If + the SAML Identity Provider cannot grant that request, for example because it is configured to release a + NameID format with `urn:oasis:names:tc:SAML:2.0:nameid-format:persistent` format instead, it returns this error + indicating an invalid NameID policy. This issue can be resolved by adjusting `nameid_format` to match the format + the SAML Identity Provider can return or by setting it to `urn:oasis:names:tc:SAML:2.0:nameid-format:unspecified` + so that the Identity Provider is allowed to return any format it wants. +-- + +. *Symptoms:* ++ +-- +Authentication in {kib} fails and the following error is printed in the +{es} logs: + +.... +The XML Signature of this SAML message cannot be validated. Please verify that the saml +realm uses the correct SAMLmetadata file/URL for this Identity Provider +.... + +*Resolution:* + +This means that {es} failed to validate the digital signature of the SAML +message that the Identity Provider sent. {es} uses the public key of the +Identity Provider that is included in the SAML metadata, in order to validate +the signature that the IdP has created using its corresponding private key. +Failure to do so, can have a number of causes: + +.. As the error message indicates, the most common cause is that the wrong +metadata file is used and as such the public key it contains doesn't correspond +to the private key the Identity Provider uses. +.. The configuration of the Identity Provider has changed or the key has been +rotated and the metadata file that {es} is using has not been updated. +.. The SAML Response has been altered in transit and the signature cannot be +validated even though the correct key is used. + +NOTE: The private keys and public keys and self-signed X.509 certificates that +are used in SAML for digital signatures as described above have no relation to +the keys and certificates that are used for TLS either on the transport or the +http layer. A failure such as the one described above has nothing to do with +your `xpack.ssl` related configuration. + +-- + +. *Symptoms:* ++ +-- +Users are unable to login with a local username and password in {kib} because +SAML is enabled. + +*Resolution:* + +If you want your users to be able to use local credentials to authenticate to +{kib} in addition to using the SAML realm for Single Sign-On, you must enable +the `basic` `authProvider` in {kib}. 
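+
+For example, in {kib} versions that use the `xpack.security.authProviders`
+setting, a minimal `kibana.yml` sketch that keeps both providers enabled might
+look like the following. Treat the setting name and the provider order as
+assumptions to verify against the documentation for your {kib} version:
+
+[source,yaml]
+----------------
+xpack.security.authProviders: [saml, basic]
+----------------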
The process is documented in the +<> +-- + +*Logging:* + +Very detailed trace logging can be enabled specifically for the SAML realm by +setting the following transient setting: + +[source, shell] +----------------------------------------------- +PUT /_cluster/settings +{ + "transient": { + "logger.org.elasticsearch.xpack.security.authc.saml": "trace" + } +} +----------------------------------------------- + + +Alternatively, you can add the following lines to the end of the +`log4j2.properties` configuration file in the `ES_PATH_CONF`: + +[source,properties] +---------------- +logger.saml.name = org.elasticsearch.xpack.security.authc.saml +logger.saml.level = TRACE +---------------- + +[[trb-security-internalserver]] +=== Internal Server Error in Kibana + +*Symptoms:* + +* In 5.1.1, an `UnhandledPromiseRejectionWarning` occurs and {kib} displays an +Internal Server Error. +//TBD: Is the same true for later releases? + +*Resolution:* + +If the Security plugin is enabled in {es} but disabled in {kib}, you must +still set `elasticsearch.username` and `elasticsearch.password` in `kibana.yml`. +Otherwise, {kib} cannot connect to {es}. + + +[[trb-security-setup]] +=== Setup-passwords command fails due to connection failure + +The {ref}/setup-passwords.html[elasticsearch-setup-passwords command] sets +passwords for the built-in users by sending user management API requests. If +your cluster uses SSL/TLS for the HTTP (REST) interface, the command attempts to +establish a connection with the HTTPS protocol. If the connection attempt fails, +the command fails. + +*Symptoms:* + +. {es} is running HTTPS, but the command fails to detect it and returns the +following errors: ++ +-- +[source, shell] +------------------------------------------ +Cannot connect to elasticsearch node. +java.net.SocketException: Unexpected end of file from server +... +ERROR: Failed to connect to elasticsearch at +http://127.0.0.1:9200/_xpack/security/_authenticate?pretty. +Is the URL correct and elasticsearch running? +------------------------------------------ +-- + +. SSL/TLS is configured, but trust cannot be established. The command returns +the following errors: ++ +-- +[source, shell] +------------------------------------------ +SSL connection to +https://127.0.0.1:9200/_xpack/security/_authenticate?pretty +failed: sun.security.validator.ValidatorException: +PKIX path building failed: +sun.security.provider.certpath.SunCertPathBuilderException: +unable to find valid certification path to requested target +Please check the elasticsearch SSL settings under +xpack.security.http.ssl. +... +ERROR: Failed to establish SSL connection to elasticsearch at +https://127.0.0.1:9200/_xpack/security/_authenticate?pretty. +------------------------------------------ +-- + +. The command fails because hostname verification fails, which results in the +following errors: ++ +-- +[source, shell] +------------------------------------------ +SSL connection to +https://idp.localhost.test:9200/_xpack/security/_authenticate?pretty +failed: java.security.cert.CertificateException: +No subject alternative DNS name matching +elasticsearch.example.com found. +Please check the elasticsearch SSL settings under +xpack.security.http.ssl. +... +ERROR: Failed to establish SSL connection to elasticsearch at +https://elasticsearch.example.com:9200/_xpack/security/_authenticate?pretty. +------------------------------------------ +-- + +*Resolution:* + +. 
If your cluster uses TLS/SSL for the HTTP interface but the +`elasticsearch-setup-passwords` command attempts to establish a non-secure +connection, use the `--url` command option to explicitly specify an HTTPS URL. +Alternatively, set the `xpack.security.http.ssl.enabled` setting to `true`. + +. If the command does not trust the {es} server, verify that you configured the +`xpack.security.http.ssl.certificate_authorities` setting or the +`xpack.security.http.ssl.truststore.path` setting. + +. If hostname verification fails, you can disable this verification by setting +`xpack.security.http.ssl.verification_mode` to `certificate`. + +For more information about these settings, see +{ref}/security-settings.html[Security Settings in {es}]. + +[[trb-security-path]] +=== Failures due to relocation of the configuration files + +*Symptoms:* + +* Active Directory or LDAP realms might stop working after upgrading to {es} 6.3 +or later releases. In 6.4 or later releases, you might see messages in the {es} +log that indicate a config file is in a deprecated location. + +*Resolution:* + +By default, in 6.2 and earlier releases, the security configuration files are +located in the `ES_PATH_CONF/x-pack` directory, where `ES_PATH_CONF` is an +environment variable that defines the location of the +{ref}/settings.html#config-files-location[config directory]. + +In 6.3 and later releases, the config directory no longer contains an `x-pack` +directory. The files that were stored in this folder, such as the +`log4j2.properties`, `role_mapping.yml`, `roles.yml`, `users`, and `users_roles` +files, now exist directly in the config directory. + +IMPORTANT: If you upgraded to 6.3 or later releases, your old security +configuration files still exist in an `x-pack` folder. That file path is +deprecated, however, and you should move your files out of that folder. + +In 6.3 and later releases, settings such as `files.role_mapping` default to +`ES_PATH_CONF/role_mapping.yml`. If you do not want to use the default locations, +you must update the settings appropriately. See +{ref}/security-settings.html[Security settings in {es}]. +
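+
+For example, if you keep the role mapping file in a custom location, a
+file-based role mapping setting on an LDAP realm in `elasticsearch.yml` might
+look like the following. The realm name, URL, and path are assumptions for
+illustration only:
+
+[source,yaml]
+----------------
+xpack.security.authc.realms.ldap1:
+  type: ldap
+  order: 0
+  url: "ldaps://ldap.example.com:636"
+  files.role_mapping: "/etc/elasticsearch/role_mapping.yml"
+----------------
+
+If you omit `files.role_mapping`, the realm uses the default
+`ES_PATH_CONF/role_mapping.yml` location described above.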