Redis configuration file: Redis listen address parameters

A detailed walkthrough of redis.conf

Location

find / -name redis.conf

Units

# Redis configuration file example.
#
# Note that in order to read the configuration file, Redis must be
# started with the file path as first argument:
#
# ./redis-server /path/to/redis.conf

# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
# 
# units are case insensitive so 1GB 1Gb 1gB are all the same.


The top of the file defines the basic size units used throughout the configuration; bits are not supported as a unit.
Units are case insensitive.

Includes

# Include one or more other config files here.  This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings.  Include files can include
# other files, so use this wisely.
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# include /path/to/local.conf
# include /path/to/other.conf

Network

################################## NETWORK #####################################

# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
tcp-backlog 511

Sets the TCP backlog. The backlog is essentially a connection queue: total backlog = the queue of connections that have not yet completed the three-way handshake + the queue of connections that have completed it.
In a high-concurrency environment you need a high backlog value to avoid slow-client connection problems.
Note that the Linux kernel silently truncates this value to /proc/sys/net/core/somaxconn, so make sure to raise both somaxconn and tcp_max_syn_backlog to get the desired effect.
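For reference, here is a minimal sketch of raising those two kernel parameters on a typical Linux host; the value 1024 is only an illustrative choice, not a recommendation from redis.conf:

# Check the current limits
cat /proc/sys/net/core/somaxconn
cat /proc/sys/net/ipv4/tcp_max_syn_backlog
# Raise them for the running kernel (example values)
sysctl -w net.core.somaxconn=1024
sysctl -w net.ipv4.tcp_max_syn_backlog=1024
# Persist across reboots, then reload
echo "net.core.somaxconn = 1024" >> /etc/sysctl.conf
echo "net.ipv4.tcp_max_syn_backlog = 1024" >> /etc/sysctl.conf
sysctl -p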



# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0

Client idle timeout: the connection is closed after a client has been idle for this many seconds; 0 disables the timeout.




# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
#    equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 300 seconds, which is the new
# Redis default starting with Redis 3.2.1.
tcp-keepalive 300

The unit is seconds. If set to 0, no keepalive probing is performed; a value of 60 is commonly recommended.
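As a quick sketch (assuming a local instance on the default port), the effective value can be inspected or changed at runtime; a runtime change is lost on restart unless it is written back with CONFIG REWRITE:

redis-cli CONFIG GET tcp-keepalive
redis-cli CONFIG SET tcp-keepalive 60
redis-cli CONFIG REWRITE    # optional: persist the change into redis.conf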

General

# Whether to run Redis as a daemon
daemonize no
# PID file; if none is specified, Redis uses this default path
pidfile /var/run/redis_6379.pid
# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
# Log level
loglevel notice
# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
# Log file name
logfile ""
# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# Whether to also send logs to the system logger (syslog)
# syslog-enabled no
# Specify the syslog identity.
# Specify the identity string used to tag Redis entries in syslog
# syslog-ident redis

# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# Syslog facility; the value may be USER or LOCAL0-LOCAL7
# syslog-facility local0
# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
# 16 databases by default, numbered 0 to 15
databases 16
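For illustration (key names here are made up, and a local instance is assumed), each connection can switch databases with SELECT; keys in one database are not visible from another:

127.0.0.1:6379> SELECT 1
OK
127.0.0.1:6379[1]> SET foo bar
OK
127.0.0.1:6379[1]> SELECT 0
OK
127.0.0.1:6379> GET foo
(nil)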

Snapshotting

################################ SNAPSHOTTING  ################################
#
# Save the DB on disk:
#
#   save <seconds> <changes>
#
#   Will save the DB if both the given number of seconds and the given
#   number of write operations against the DB occurred.
#
#   In the example below the behaviour will be to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
#
#   Note: you can disable saving completely by commenting out all "save" lines.
#
#   It is also possible to remove all the previously configured save
#   points by adding a save directive with a single empty string argument
#   like in the following example:
# FLUSHALL and SHUTDOWN trigger an immediate save, producing a backup
# To disable the RDB persistence policy entirely, simply configure no save directives at all, or use the save "" line below (passing an empty string to save works as well)
#   save ""
# A snapshot is triggered when any one of the following three conditions is met
# Save if at least 1 key changed within 900 seconds
save 900 1
# Save if at least 10 keys changed within 300 seconds
save 300 10
# Save if at least 10000 keys changed within 60 seconds
save 60 10000
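The save points can also be inspected or adjusted on a running server without editing the file; a minimal sketch assuming a local instance:

redis-cli CONFIG GET save
# 1) "save"
# 2) "900 1 300 10 60 10000"
redis-cli CONFIG SET save ""                       # disable automatic RDB snapshots
redis-cli CONFIG SET save "900 1 300 10 60 10000"  # restore the three save points above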


# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
# If this is set to no, Redis keeps accepting writes even when the last background save failed, meaning you either do not care about the possible inconsistency or you detect and handle such failures by other means
stop-writes-on-bgsave-error yes


# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
# Controls whether snapshots written to disk are compressed.
# If enabled, Redis compresses them with the LZF algorithm.
# If you do not want to spend CPU on compression, you can disable this feature (at the cost of larger RDB files).
rdbcompression yes


# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
# After writing a snapshot, Redis can also append a CRC64 checksum for data validation.
# This adds roughly 10% overhead when saving and loading RDB files,
# so you can disable it if you want the maximum performance gain.
rdbchecksum yes


# The filename where to dump the DB
# File name of the snapshot written on save, and the file read back when Redis restarts (for example after a power failure)
dbfilename dump.rdb


# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
# Working directory
dir ./
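To see where the snapshot actually ends up, the following sketch (assuming a local instance with the default settings above) triggers a background save and locates the file:

redis-cli CONFIG GET dir          # working directory in use
redis-cli CONFIG GET dbfilename   # "dump.rdb" by default
redis-cli BGSAVE                  # trigger a background snapshot
redis-cli LASTSAVE                # UNIX timestamp of the last successful save
ls -l ./dump.rdb                  # path depends on the dir value above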

Replication

Security

# Get the login password
config get requirepass
127.0.0.1:8686> config get requirepass
1) "requirepass"
2) "51310400"
# Query the working directory in use at startup
config get dir
127.0.0.1:8686> config get dir
1) "dir"
2) "/alidata/redis-5.0.3/db"

# Set the Redis password
config set requirepass 123456

# Log in to Redis and authenticate
[root@izm5e2q95pbpe1hh0kkwoiz /]# redis-cli -p 8686
127.0.0.1:8686> ping
(error) NOAUTH Authentication required.
127.0.0.1:8686> auth 51310400
OK
127.0.0.1:8686> ping
PONG
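Note that CONFIG SET requirepass only affects the running instance and is lost on restart. A minimal sketch of making it permanent, reusing the port and password from the example above:

# Option 1: write the runtime change back into the loaded config file
redis-cli -p 8686 -a 51310400 CONFIG REWRITE
# Option 2: set it directly in redis.conf and restart the server
#   requirepass 51310400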

Clients

# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
# Maximum number of simultaneous client connections, 10000 by default
# maxclients 10000
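To check how close a running instance is to this limit (a sketch assuming a local instance), compare the effective limit with the current connection count:

redis-cli CONFIG GET maxclients   # effective limit after the file-descriptor adjustment
redis-cli INFO clients            # connected_clients, blocked_clients, ...
redis-cli CLIENT LIST | wc -l     # rough count of currently connected clients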

Limits

# Maximum memory limit
# maxmemory <bytes>

## Eviction policy applied when the maxmemory limit is reached
# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
## Evict keys using an approximated LRU algorithm, but only among keys that have an expire set. LRU = Least Recently Used.
# volatile-lru -> Evict using approximated LRU among the keys with an expire set.
## Evict any key using an approximated LRU algorithm
# allkeys-lru -> Evict any key using approximated LRU.
# volatile-lfu -> Evict using approximated LFU among the keys with an expire set.
# allkeys-lfu -> Evict any key using approximated LFU.
## Evict a random key, but only among keys that have an expire set
# volatile-random -> Remove a random key among the ones with an expire set.
## Evict a random key, any key
# allkeys-random -> Remove a random key, any key.
## Evict the keys with the smallest TTL, i.e. those that will expire soonest
# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
## Do not evict anything; write operations simply return an error
# noeviction -> Don't evict anything, just return an error on write operations.
#
# LRU means Least Recently Used
# LFU means Least Frequently Used
#
# Both LRU, LFU and volatile-ttl are implemented using approximated
# randomized algorithms.
#
# Note: with any of the above policies, Redis will return an error on write
#       operations, when there are no suitable keys for eviction.
#
#       At the date of writing these commands are: set setnx setex append
#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
#       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
#       getset mset msetnx exec sort
#
# The default is:
# The default policy is noeviction, but the directive below remains commented out
# maxmemory-policy noeviction
# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
#
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs more CPU. 3 is faster but not very accurate.
# Sets the sample size. The LRU and minimal TTL algorithms are not exact but approximated, so you can tune the sample size for speed or accuracy. By default Redis checks this many keys and evicts the least recently used among them
# maxmemory-samples 5
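As a concrete illustration (the 100mb cap and the allkeys-lru policy are example values only, not recommendations from the article), a memory limit and eviction policy can be applied at runtime:

redis-cli CONFIG SET maxmemory 100mb
redis-cli CONFIG SET maxmemory-policy allkeys-lru
redis-cli CONFIG SET maxmemory-samples 10   # closer to true LRU at a higher CPU cost
redis-cli INFO memory | grep -E "used_memory_human|maxmemory_human|maxmemory_policy"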

Append Only Mode

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.
# Disabled by default
appendonly no
# The name of the append only file (default: "appendonly.aof")
# Name of the append only (AOF) backup file
appendfilename "appendonly.aof"

# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".

# appendfsync always
# fsync policy: when writes are flushed to disk
# always: synchronous persistence; every data change is written to disk immediately. Worse performance, but the most complete data record.
# everysec: the factory default; asynchronous, fsync once per second. If the server crashes within that second, up to one second of data is lost.
# no: do not fsync explicitly; let the operating system decide when to flush.
appendfsync everysec
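A minimal sketch (assuming a local instance) of turning AOF on for a running server and verifying it; enabling appendonly at runtime also rewrites the current dataset into a fresh AOF:

redis-cli CONFIG SET appendonly yes
redis-cli CONFIG GET appendfsync
redis-cli INFO persistence | grep -E "aof_enabled|aof_rewrite_in_progress|aof_last_bgrewrite_status"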

# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.
# Whether to skip fsync() in the main process while a BGSAVE or BGREWRITEAOF is in progress; keep the default no, which is the safest choice for data durability
no-appendfsync-on-rewrite no

# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.
# Rewrite trigger threshold: here the AOF must have grown by 100% relative to its size after the last rewrite, i.e. doubled in size
auto-aof-rewrite-percentage 100
# Minimum size threshold: here the AOF log must be larger than 64MB before a rewrite is triggered
auto-aof-rewrite-min-size 64mb
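Putting the two directives together: with the values above, an automatic BGREWRITEAOF fires only once the AOF exceeds 64MB and has grown to at least double its size after the previous rewrite. A rewrite can also be forced by hand (assuming a local instance):

redis-cli BGREWRITEAOF                                        # force an AOF rewrite now
redis-cli INFO persistence | grep aof_last_bgrewrite_status   # should report "ok" when done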

