[ELK] Installing and Deploying ELK on Windows

Installing and Deploying ELK

  • 1. Download
  • 2. Configuration & Startup
    • 2.1 elasticsearch
      • 2.1.1 Generate the CA certificate
      • 2.1.2 Generate the node certificate and key
      • 2.1.3 Move the certificates to the target directory
      • 2.1.4 Edit the configuration
      • 2.1.5 Start
      • 2.1.6 Access test
      • 2.1.7 Create a Kibana account
    • 2.2 kibana
      • 2.2.1 Edit the configuration
      • 2.2.2 Start
      • 2.2.3 Access test
    • 2.3 logstash
      • 2.3.1 Edit the configuration
      • 2.3.2 Start
    • 2.4 filebeat
      • 2.4.1 Edit the configuration
      • 2.4.2 Start

1. Download

Official site: https://www.elastic.co/

Download page: Download Elasticsearch | Elastic

Other versions and products: Past Releases of Elastic Stack Software | Elastic

IK analyzer: https://github.com/infinilabs/analysis-ik/releases

What to download: Elasticsearch, Kibana, Logstash, Filebeat, and (if needed) the IK analyzer plugin.

All products must be the same version.

2. Configuration & Startup

2.1 elasticsearch

2.1.1 Generate the CA certificate

Switch to the bin directory and run:

elasticsearch-certutil.bat ca

At the first prompt, just press Enter; the file will be generated in the current directory.
At the second prompt, enter a password, e.g. 123456.
When it finishes, a file named elastic-stack-ca.p12 is generated.


2.1.2 Generate the node certificate and key

elasticsearch-certutil.bat cert --ca ./elastic-stack-ca.p12

At the first prompt, enter the CA password and press Enter.
At the second prompt, press Enter without typing anything (accept the default file name).
At the third prompt, set and confirm the password, then press Enter. A file named elastic-certificates.p12 is generated.


2.1.3 Move the certificates to the target directory

Place the generated elastic-stack-ca.p12 and elastic-certificates.p12 in the E:\software\elasticsearch-8.9.2\config\certificates directory.

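If you are still in the bin directory where the two .p12 files were generated, the following cmd commands are one way to do this (the relative path assumes E:\software\elasticsearch-8.9.2 is your Elasticsearch root; adjust to your install):

mkdir ..\config\certificates
move elastic-*.p12 ..\config\certificates\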

2.1.4 Edit the configuration

Edit config/elasticsearch.yml:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: my-elatics
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: E:\ELK\elasticsearch-8.9.2\data
#
# Path to log files:
#
path.logs: E:\ELK\elasticsearch-8.9.2\logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 0.0.0.0
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["127.0.0.1"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false

#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 22-04-2024 01:19:26
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: true

xpack.security.enrollment.enabled: true

#xpack.security.http.ssl:
#  enabled: true
#  keystore.path: certs/http.p12

xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: E:\\software\\elasticsearch-8.9.2\\config\\certificates\\elastic-certificates.p12
  keystore.password: 123456
  truststore.path: E:\\software\\elasticsearch-8.9.2\\config\\certificates\\elastic-certificates.p12
  truststore.password: 123456
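Optional, and not part of the original steps: if you would rather not keep the certificate password in plain text in elasticsearch.yml, Elasticsearch 8.x can read it from its secure keystore instead. A minimal sketch, run from the bin directory (after which the keystore.password/truststore.password lines above can be removed):

elasticsearch-keystore.bat add xpack.security.transport.ssl.keystore.secure_password
elasticsearch-keystore.bat add xpack.security.transport.ssl.truststore.secure_password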

2.1.5 Start

Double-click elasticsearch.bat in the bin directory to start it. On the first run, the console prints the password for the elastic user, an enrollment token, and related information; be sure to record them.
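If the generated elastic password was not recorded (or the window was closed), it can be regenerated later with the reset tool that ships with Elasticsearch 8.x; run it from the bin directory:

elasticsearch-reset-password.bat -u elastic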

2.1.6 Access test

Open http://localhost:9200/. The browser prompts for a username and password: the username is elastic, and the password is the one printed in the startup output. After confirming, the node and cluster information is displayed.


Success!
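The same check can be done from the command line; a minimal example, assuming curl is available on your PATH (it will prompt for the elastic password):

curl -u elastic http://localhost:9200/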

2.1.7 Create a Kibana account

The elastic account cannot be used for Kibana's connection to Elasticsearch, so you need to create an account yourself and grant it roles. Open cmd, change to the Elasticsearch bin directory, and run:

elasticsearch-users useradd <username>

You will then be prompted for a password; enter one and the user is created.

Grant roles:

elasticsearch-users roles -a superuser <username>
elasticsearch-users roles -a kibana_system <username>

The superuser role lets the account access Elasticsearch on port 9200 normally; the kibana_system role is required before Kibana can connect to Elasticsearch with this account.

Check the granted roles:

elasticsearch-users roles -v <username>

Roles granted successfully.
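As a concrete example, the kibana.yml below uses an account named user-kibana (the name is simply the one chosen for this walkthrough); the full sequence would be:

elasticsearch-users useradd user-kibana
elasticsearch-users roles -a superuser user-kibana
elasticsearch-users roles -a kibana_system user-kibana
elasticsearch-users roles -v user-kibana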

2.2 kibana

2.2.1 Edit the configuration

Edit config/kibana.yml:

# For more configuration options see the configuration guide for Kibana in
# https://www.elastic.co/guide/index.html

# =================== System: Kibana Server ===================
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "localhost"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# Defaults to `false`.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576

# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"

# =================== System: Kibana Server (Optional) ===================
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: true
#server.ssl.certificate: E:\ELK\kibana-8.9.2\config\certs\http_ca.crt
#server.ssl.key: /path/to/your/server.key

# =================== System: Elasticsearch ===================
# The URLs of the Elasticsearch instances to use for all your queries.
#elasticsearch.hosts: ["http://localhost:9200"]

#enterpriseSearch.host: 'http://localhost:3002'

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana_system"
#elasticsearch.password: "pass"

elasticsearch.hosts: ["http://localhost:9200"]
elasticsearch.username: "user-kibana"
elasticsearch.password: "123456"

# Kibana can also authenticate to Elasticsearch via "service account tokens".
# Service account tokens are Bearer style tokens that replace the traditional username/password based configuration.
# Use this token instead of a username/password.
#elasticsearch.serviceAccountToken: "eyJ2ZXIiOiI4LjkuMiIsImFkciI6WyIxOTIuMTY4LjMuNzQ6OTIwMCJdLCJmZ3IiOiI4MjBlYjQwZWJlZDZkMjczZjU0NzM5ZWY2Y2EyNDFkOTY3YmYzMDFlYWYyMGU5M2E4YWRkNGExMjlhNTBlMzAwIiwia2V5IjoiRzA1akE0OEI4R3Rabk1QRmNXVUU6Um45QmsxRWtTWU9pcFlxdW1haVlQQSJ9"

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# The maximum number of sockets that can be used for communications with elasticsearch.
# Defaults to `Infinity`.
#elasticsearch.maxSockets: 1024

# Specifies whether Kibana should use compression for communications with elasticsearch
# Defaults to `false`.
#elasticsearch.compression: false

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# =================== System: Elasticsearch (Optional) ===================
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# =================== System: Logging ===================
# Set the value of this setting to off to suppress all logging output, or to debug to log everything. Defaults to 'info'
logging.root.level: info

# Enables you to specify a file where Kibana stores log output.
#logging.appenders.default:
#  type: file
#  fileName: /var/logs/kibana.log
#  layout:
#    type: json

# Logs queries sent to Elasticsearch.
#logging.loggers:
#  - name: elasticsearch.query
#    level: debug

# Logs http responses.
#logging.loggers:
#  - name: http.server.response
#    level: debug

# Logs system usage information.
#logging.loggers:
#  - name: metrics.ops
#    level: debug

# =================== System: Other ===================
# The path where Kibana stores persistent data not saved in Elasticsearch. Defaults to data
#path.data: data

# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000ms.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English (default) "en", Chinese "zh-CN", Japanese "ja-JP", French "fr-FR".
i18n.locale: "zh-CN"

# =================== Frequently used (Optional)===================

# =================== Saved Objects: Migrations ===================
# Saved object migrations run at startup. If you run into migration-related issues, you might need to adjust these settings.

# The number of documents migrated at a time.
# If Kibana can't start up or upgrade due to an Elasticsearch `circuit_breaking_exception`,
# use a smaller batchSize value to reduce the memory pressure. Defaults to 1000 objects per batch.
#migrations.batchSize: 1000

# The maximum payload size for indexing batches of upgraded saved objects.
# To avoid migrations failing due to a 413 Request Entity Too Large response from Elasticsearch.
# This value should be lower than or equal to your Elasticsearch cluster's `http.max_content_length`
# configuration option. Default: 100mb
#migrations.maxBatchSizeBytes: 100mb

# The number of times to retry temporary migration failures. Increase the setting
# if migrations fail frequently with a message such as `Unable to complete the [...] step after
# 15 attempts, terminating`. Defaults to 15
#migrations.retryAttempts: 15

# =================== Search Autocomplete ===================
# Time in milliseconds to wait for autocomplete suggestions from Elasticsearch.
# This value must be a whole number greater than zero. Defaults to 1000ms
#unifiedSearch.autocomplete.valueSuggestions.timeout: 1000

# Maximum number of documents loaded by each shard to generate autocomplete suggestions.
# This value must be a whole number greater than zero. Defaults to 100_000
#unifiedSearch.autocomplete.valueSuggestions.terminateAfter: 100000
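Optional alternative, not part of the original steps: instead of the user-kibana/123456 pair, Kibana can authenticate with a service account token (the commented elasticsearch.serviceAccountToken setting above). One can be created from the Elasticsearch bin directory; the token name kibana-token here is just an example:

elasticsearch-service-tokens create elastic/kibana kibana-token

The command prints a token value to paste into elasticsearch.serviceAccountToken; the elasticsearch.username/elasticsearch.password settings should then be removed.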

2.2.2 Start

Double-click kibana.bat in the bin directory.

Startup succeeded.

2.2.3 Access test

Open http://localhost:5601 and log in with the account and password configured above.

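Kibana's status can also be checked from the command line via its /api/status endpoint; a quick sketch, assuming curl is available and using the user-kibana account created earlier:

curl -u user-kibana:123456 http://localhost:5601/api/status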

2.3 logstash

2.3.1 Edit the configuration

Edit the config/logstash-sample.conf file (or make a copy of it and edit the copy):

# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
  beats {
    port => 5044
  }
}

output {
  file {
    path => "E:\ELK\logstash-8.9.2\logstash-test.log"    # write a local copy of the log on the web1 node
  }
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "QFpuWk-MOuwIwXsE=TM6"
  }
}
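Before starting, the pipeline file can optionally be syntax-checked; Logstash's --config.test_and_exit flag parses the configuration and exits without starting the pipeline (run from the bin directory):

logstash.bat -f ./config/logstash-sample.conf --config.test_and_exit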

2.3.2 Start

Run the following in the bin directory:

logstash.bat -f ./config/logstash-sample.conf

Success.

2.4 filebeat

2.4.1 Edit the configuration

Edit filebeat.yml:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# filestream is an input for collecting log messages from files.
- type: filestream

  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - E:\logs\*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  # hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]
  enabled: true

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:

# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
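Before starting Filebeat, the configuration and the connection to Logstash can optionally be verified with Filebeat's built-in test subcommands (run from the Filebeat directory):

filebeat.exe test config -c filebeat.yml
filebeat.exe test output -c filebeat.yml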

2.4.2 Start

Run the following from the Filebeat directory (the one containing filebeat.exe and filebeat.yml):

filebeat.exe -e -c filebeat.yml

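Running in the foreground with -e is fine for testing. If you want Filebeat to keep running in the background, the Windows package also ships an install-service-filebeat.ps1 script that registers it as a Windows service; a sketch, run from an elevated PowerShell prompt in the Filebeat directory:

PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-filebeat.ps1
Start-Service filebeat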
