
Schema changes time out in Cassandra

I have tried to set up a 2-node Cassandra cluster and it is pretty much ready. The nodes appear to be connected, as shown below:

Datacenter: datacenter1 
======================= 
Status=Up/Down 
|/ State=Normal/Leaving/Joining/Moving 
-- Address  Load  Tokens  Owns Host ID        Rack 
UN 10.0.7.80 3.74 MB 256   ?  087ef42d-15d6-4cbc-9530-5415521ae7dc rack1 
UN 10.0.7.240 493.75 KB 256   ?  34d9098a-3397-4024-8dce-836001a8c929 rack1 

Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless. 
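
For reference, the output above comes from running nodetool status without a keyspace argument, which is why the note says the ownership information is meaningless; passing a keyspace name makes the Owns column meaningful. A minimal sketch, assuming nodetool is on the PATH and a keyspace such as keyspace1 already exists:

nodetool status
nodetool status keyspace1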

1) Every DDL operation times out here:

cqlsh> CREATE KEYSPACE keyspace1 WITH replication = {'class':'SimpleStrategy', 'replication_factor' : 2}; 
OperationTimedOut: errors={}, last_host=127.0.0.1 

or

cqlsh> drop keyspace keyspace2; 
OperationTimedOut: errors={}, last_host=127.0.0.1 

2) Although the operations above time out, I can see those keyspaces after a few moments; however, they are not replicated to the other node even though replication_factor = 2. I do not know whether these timeout errors are related to replication or not. I tried to check by stopping one of the two nodes, but I got the same timeout error even then.
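
One way to check whether a keyspace actually reached both nodes, and whether the nodes agree on the schema, is sketched below. This assumes cqlsh and nodetool are run locally on each node (the pasted config uses rpc_address: localhost, so remote cqlsh connections would be refused):

# list the keyspaces each node knows about
cqlsh -e "DESCRIBE KEYSPACES;"

# compare schema versions across the cluster; more than one entry
# under "Schema versions" means the change has not propagated
nodetool describecluster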

The configuration file (cassandra.yaml) is as follows:
cluster_name: 'MarketplaceDB-Cluster' 
num_tokens: 256 

hinted_handoff_enabled: true 

max_hint_window_in_ms: 10800000 # 3 hours 

hinted_handoff_throttle_in_kb: 1024 

max_hints_delivery_threads: 2 

batchlog_replay_throttle_in_kb: 1024 


authenticator: AllowAllAuthenticator 

authorizer: AllowAllAuthorizer 

role_manager: CassandraRoleManager 

roles_validity_in_ms: 2000 

permissions_validity_in_ms: 1000 

partitioner: org.apache.cassandra.dht.Murmur3Partitioner 

data_file_directories: 
    - /var/lib/cassandra/data 

commitlog_directory: /var/lib/cassandra/commitlog 

disk_failure_policy: stop 

commit_failure_policy: stop 

key_cache_size_in_mb: 

# Default is 14400 or 4 hours. 
key_cache_save_period: 14400 

row_cache_size_in_mb: 0 

row_cache_save_period: 0 

counter_cache_size_in_mb: 

counter_cache_save_period: 7200 

commitlog_sync: periodic 
commitlog_sync_period_in_ms: 10000 

commitlog_segment_size_in_mb: 32 

# any class that implements the SeedProvider interface and has a 
# constructor that takes a Map<String, String> of parameters will do. 
seed_provider: 
    # Addresses of hosts that are deemed contact points. 
    # Cassandra nodes use this list of hosts to find each other and learn 
    # the topology of the ring. You must change this if you are running 
    # multiple nodes! 
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider 
     parameters: 
      # seeds is actually a comma-delimited list of addresses. 
      # Ex: "<ip1>,<ip2>,<ip3>" 
      - seeds: "127.0.0.1" 

# On the other hand, since writes are almost never IO bound, the ideal 
# number of "concurrent_writes" is dependent on the number of cores in 
# your system; (8 * number_of_cores) is a good rule of thumb. 
concurrent_reads: 32 
concurrent_writes: 32 
concurrent_counter_writes: 32 

memtable_allocation_type: heap_buffers 

#memtable_flush_writers: 8 

index_summary_capacity_in_mb: 

index_summary_resize_interval_in_minutes: 60 

trickle_fsync: false 
trickle_fsync_interval_in_kb: 10240 

storage_port: 7000 

ssl_storage_port: 7001 

listen_address: 10.0.7.80 

internode_authenticator: org.apache.cassandra.auth.AllowAllInternodeAuthenticator 

start_native_transport: true 
# port for the CQL native transport to listen for clients on 
# For security reasons, you should not expose this port to the internet. Firewall it if needed. 
native_transport_port: 9042 
# The maximum threads for handling requests when the native transport is used. 
# This is similar to rpc_max_threads though the default differs slightly (and 
# there is no native_transport_min_threads, idle threads will always be stopped 
# after 30 seconds). 
native_transport_max_threads: 128 
# 
# The maximum size of allowed frame. Frame (requests) larger than this will 
# be rejected as invalid. The default is 256MB. 
# native_transport_max_frame_size_in_mb: 256 

# The maximum number of concurrent client connections. 
# The default is -1, which means unlimited. 
# native_transport_max_concurrent_connections: -1 

# The maximum number of concurrent client connections per source ip. 
# The default is -1, which means unlimited. 
# native_transport_max_concurrent_connections_per_ip: -1 

# Whether to start the thrift rpc server. 
start_rpc: true 

# The address or interface to bind the Thrift RPC service and native transport 
# server to. 
# 
# Set rpc_address OR rpc_interface, not both. Interfaces must correspond 
# to a single address, IP aliasing is not supported. 
# 
# Leaving rpc_address blank has the same effect as on listen_address 
# (i.e. it will be based on the configured hostname of the node). 
# 
# Note that unlike listen_address, you can specify 0.0.0.0, but you must also 
# set broadcast_rpc_address to a value other than 0.0.0.0. 
# 
# For security reasons, you should not expose this port to the internet. Firewall it if needed. 
# 
# If you choose to specify the interface by name and the interface has an ipv4 and an ipv6 address 
# you can specify which should be chosen using rpc_interface_prefer_ipv6. If false the first ipv4 
# address will be used. If true the first ipv6 address will be used. Defaults to false preferring 
# ipv4. If there is only one address it will be selected regardless of ipv4/ipv6. 
rpc_address: localhost 
# rpc_interface: eth1 
# rpc_interface_prefer_ipv6: false 

# port for Thrift to listen for clients on 
rpc_port: 9160 

# RPC address to broadcast to drivers and other Cassandra nodes. This cannot 
# be set to 0.0.0.0. If left blank, this will be set to the value of 
# rpc_address. If rpc_address is set to 0.0.0.0, broadcast_rpc_address must 
# be set. 
# broadcast_rpc_address: 1.2.3.4 

# enable or disable keepalive on rpc/native connections 
rpc_keepalive: true 

# Cassandra provides two out-of-the-box options for the RPC Server: 
# 
# sync -> One thread per thrift connection. For a very large number of clients, memory 
#   will be your limiting factor. On a 64 bit JVM, 180KB is the minimum stack size 
#   per thread, and that will correspond to your use of virtual memory (but physical memory 
#   may be limited depending on use of stack space). 
# 
# hsha -> Stands for "half synchronous, half asynchronous." All thrift clients are handled 
#   asynchronously using a small number of threads that does not vary with the amount 
#   of thrift clients (and thus scales well to many clients). The rpc requests are still 
#   synchronous (one thread per active request). If hsha is selected then it is essential 
#   that rpc_max_threads is changed from the default value of unlimited. 
# 
# The default is sync because on Windows hsha is about 30% slower. On Linux, 
# sync/hsha performance is about the same, with hsha of course using less memory. 
# 
# Alternatively, can provide your own RPC server by providing the fully-qualified class name 
# of an o.a.c.t.TServerFactory that can create an instance of it. 
rpc_server_type: sync 

# Uncomment rpc_min|max_thread to set request pool size limits. 
# 
# Regardless of your choice of RPC server (see above), the number of maximum requests in the 
# RPC thread pool dictates how many concurrent requests are possible (but if you are using the sync 
# RPC server, it also dictates the number of clients that can be connected at all). 
# 
# The default is unlimited and thus provides no protection against clients overwhelming the server. You are 
# encouraged to set a maximum that makes sense for you in production, but do keep in mind that 
# rpc_max_threads represents the maximum number of client requests this server may execute concurrently. 
# 
# rpc_min_threads: 16 
# rpc_max_threads: 2048 

# uncomment to set socket buffer sizes on rpc connections 
# rpc_send_buff_size_in_bytes: 
# rpc_recv_buff_size_in_bytes: 

# Uncomment to set socket buffer size for internode communication 
# Note that when setting this, the buffer size is limited by net.core.wmem_max 
# and when not setting it it is defined by net.ipv4.tcp_wmem 
# See: 
# /proc/sys/net/core/wmem_max 
# /proc/sys/net/core/rmem_max 
# /proc/sys/net/ipv4/tcp_wmem 
# /proc/sys/net/ipv4/tcp_rmem 
# and: man tcp 
# internode_send_buff_size_in_bytes: 
# internode_recv_buff_size_in_bytes: 

# Frame size for thrift (maximum message length). 
thrift_framed_transport_size_in_mb: 15 

# Set to true to have Cassandra create a hard link to each sstable 
# flushed or streamed locally in a backups/ subdirectory of the 
# keyspace data. Removing these links is the operator's 
# responsibility. 
incremental_backups: false 

# Whether or not to take a snapshot before each compaction. Be 
# careful using this option, since Cassandra won't clean up the 
# snapshots for you. Mostly useful if you're paranoid when there 
# is a data format change. 
snapshot_before_compaction: false 

# Whether or not a snapshot is taken of the data before keyspace truncation 
# or dropping of column families. The STRONGLY advised default of true 
# should be used to provide data safety. If you set this flag to false, you will 
# lose data on truncation or drop. 
auto_snapshot: true 

# When executing a scan, within or across a partition, we need to keep the 
# tombstones seen in memory so we can return them to the coordinator, which 
# will use them to make sure other replicas also know about the deleted rows. 
# With workloads that generate a lot of tombstones, this can cause performance 
# problems and even exhaust the server heap. 
# (http://www.datastax.com/dev/blog/cassandra-anti-patterns-queues-and-queue-like-datasets) 
# Adjust the thresholds here if you understand the dangers and want to 
# scan more tombstones anyway. These thresholds may also be adjusted at runtime 
# using the StorageService mbean. 
tombstone_warn_threshold: 1000 
tombstone_failure_threshold: 100000 

# Granularity of the collation index of rows within a partition. 
# Increase if your rows are large, or if you have a very large 
# number of rows per partition. The competing goals are these: 
# 1) a smaller granularity means more index entries are generated 
#  and looking up rows within the partition by collation column 
#  is faster 
# 2) but, Cassandra will keep the collation index in memory for hot 
#  rows (as part of the key cache), so a larger granularity means 
#  you can cache more hot rows 
column_index_size_in_kb: 64 

# Log WARN on any batch size exceeding this value. 5kb per batch by default. 
# Caution should be taken on increasing the size of this threshold as it can lead to node instability. 
batch_size_warn_threshold_in_kb: 5 

# Fail any batch exceeding this value. 50kb (10x warn threshold) by default. 
batch_size_fail_threshold_in_kb: 50 

# Number of simultaneous compactions to allow, NOT including 
# validation "compactions" for anti-entropy repair. Simultaneous 
# compactions can help preserve read performance in a mixed read/write 
# workload, by mitigating the tendency of small sstables to accumulate 
# during a single long running compactions. The default is usually 
# fine and if you experience problems with compaction running too 
# slowly or too fast, you should look at 
# compaction_throughput_mb_per_sec first. 
# 
# concurrent_compactors defaults to the smaller of (number of disks, 
# number of cores), with a minimum of 2 and a maximum of 8. 
# 
# If your data directories are backed by SSD, you should increase this 
# to the number of cores. 
#concurrent_compactors: 2 

# Throttles compaction to the given total throughput across the entire 
# system. The faster you insert data, the faster you need to compact in 
# order to keep the sstable count down, but in general, setting this to 
# 16 to 32 times the rate you are inserting data is more than sufficient. 
# Setting this to 0 disables throttling. Note that this account for all types 
# of compaction, including validation compaction. 
compaction_throughput_mb_per_sec: 16 

# Log a warning when compacting partitions larger than this value 
compaction_large_partition_warning_threshold_mb: 100 

# When compacting, the replacement sstable(s) can be opened before they 
# are completely written, and used in place of the prior sstables for 
# any range that has been written. This helps to smoothly transfer reads 
# between the sstables, reducing page cache churn and keeping hot rows hot 
sstable_preemptive_open_interval_in_mb: 50 

# Throttles all outbound streaming file transfers on this node to the 
# given total throughput in Mbps. This is necessary because Cassandra does 
# mostly sequential IO when streaming data during bootstrap or repair, which 
# can lead to saturating the network connection and degrading rpc performance. 
# When unset, the default is 200 Mbps or 25 MB/s. 
# stream_throughput_outbound_megabits_per_sec: 200 

# Throttles all streaming file transfer between the datacenters, 
# this setting allows users to throttle inter dc stream throughput in addition 
# to throttling all network stream traffic as configured with 
# stream_throughput_outbound_megabits_per_sec 
# inter_dc_stream_throughput_outbound_megabits_per_sec: 

# How long the coordinator should wait for read operations to complete 
read_request_timeout_in_ms: 5000 
# How long the coordinator should wait for seq or index scans to complete 
range_request_timeout_in_ms: 10000 
# How long the coordinator should wait for writes to complete 
write_request_timeout_in_ms: 2000 
# How long the coordinator should wait for counter writes to complete 
counter_write_request_timeout_in_ms: 5000 
# How long a coordinator should continue to retry a CAS operation 
# that contends with other proposals for the same row 
cas_contention_timeout_in_ms: 1000 
# How long the coordinator should wait for truncates to complete 
# (This can be much longer, because unless auto_snapshot is disabled 
# we need to flush first so we can snapshot before removing the data.) 
truncate_request_timeout_in_ms: 60000 
# The default timeout for other, miscellaneous operations 
request_timeout_in_ms: 10000 

# Enable operation timeout information exchange between nodes to accurately 
# measure request timeouts. If disabled, replicas will assume that requests 
# were forwarded to them instantly by the coordinator, which means that 
# under overload conditions we will waste that much extra time processing 
# already-timed-out requests. 
# 
# Warning: before enabling this property make sure NTP is installed 
# and the times are synchronized between the nodes. 
cross_node_timeout: false 

# Enable socket timeout for streaming operation. 
# When a timeout occurs during streaming, streaming is retried from the start 
# of the current file. This _can_ involve re-streaming an important amount of 
# data, so you should avoid setting the value too low. 
# Default value is 0, which never timeout streams. 
# streaming_socket_timeout_in_ms: 0 

# phi value that must be reached for a host to be marked down. 
# most users should never need to adjust this. 
# phi_convict_threshold: 8 

# endpoint_snitch -- Set this to a class that implements 
# IEndpointSnitch. The snitch has two functions: 
# - it teaches Cassandra enough about your network topology to route 
# requests efficiently 
# - it allows Cassandra to spread replicas around your cluster to avoid 
# correlated failures. It does this by grouping machines into 
# "datacenters" and "racks." Cassandra will do its best not to have 
# more than one replica on the same "rack" (which may not actually 
# be a physical location) 
# 
# IF YOU CHANGE THE SNITCH AFTER DATA IS INSERTED INTO THE CLUSTER, 
# YOU MUST RUN A FULL REPAIR, SINCE THE SNITCH AFFECTS WHERE REPLICAS 
# ARE PLACED. 
# 
# Out of the box, Cassandra provides 
# - SimpleSnitch: 
# Treats Strategy order as proximity. This can improve cache 
# locality when disabling read repair. Only appropriate for 
# single-datacenter deployments. 
# - GossipingPropertyFileSnitch 
# This should be your go-to snitch for production use. The rack 
# and datacenter for the local node are defined in 
# cassandra-rackdc.properties and propagated to other nodes via 
# gossip. If cassandra-topology.properties exists, it is used as a 
# fallback, allowing migration from the PropertyFileSnitch. 
# - PropertyFileSnitch: 
# Proximity is determined by rack and data center, which are 
# explicitly configured in cassandra-topology.properties. 
# - Ec2Snitch: 
# Appropriate for EC2 deployments in a single Region. Loads Region 
# and Availability Zone information from the EC2 API. The Region is 
# treated as the datacenter, and the Availability Zone as the rack. 
# Only private IPs are used, so this will not work across multiple 
# Regions. 
# - Ec2MultiRegionSnitch: 
# Uses public IPs as broadcast_address to allow cross-region 
# connectivity. (Thus, you should set seed addresses to the public 
# IP as well.) You will need to open the storage_port or 
# ssl_storage_port on the public IP firewall. (For intra-Region 
# traffic, Cassandra will switch to the private IP after 
# establishing a connection.) 
# - RackInferringSnitch: 
# Proximity is determined by rack and data center, which are 
# assumed to correspond to the 3rd and 2nd octet of each node's IP 
# address, respectively. Unless this happens to match your 
# deployment conventions, this is best used as an example of 
# writing a custom Snitch class and is provided in that spirit. 
# 
# You can use a custom Snitch by setting this to the full class name 
# of the snitch, which will be assumed to be on your classpath. 
endpoint_snitch: SimpleSnitch 

# controls how often to perform the more expensive part of host score 
# calculation 
dynamic_snitch_update_interval_in_ms: 100 
# controls how often to reset all host scores, allowing a bad host to 
# possibly recover 
dynamic_snitch_reset_interval_in_ms: 600000 
# if set greater than zero and read_repair_chance is < 1.0, this will allow 
# 'pinning' of replicas to hosts in order to increase cache capacity. 
# The badness threshold will control how much worse the pinned host has to be 
# before the dynamic snitch will prefer other replicas over it. This is 
# expressed as a double which represents a percentage. Thus, a value of 
# 0.2 means Cassandra would continue to prefer the static snitch values 
# until the pinned host was 20% worse than the fastest. 
dynamic_snitch_badness_threshold: 0.1 

# request_scheduler -- Set this to a class that implements 
# RequestScheduler, which will schedule incoming client requests 
# according to the specific policy. This is useful for multi-tenancy 
# with a single Cassandra cluster. 
# NOTE: This is specifically for requests from the client and does 
# not affect inter node communication. 
# org.apache.cassandra.scheduler.NoScheduler - No scheduling takes place 
# org.apache.cassandra.scheduler.RoundRobinScheduler - Round robin of 
# client requests to a node with a separate queue for each 
# request_scheduler_id. The scheduler is further customized by 
# request_scheduler_options as described below. 
request_scheduler: org.apache.cassandra.scheduler.NoScheduler 

# Scheduler Options vary based on the type of scheduler 
# NoScheduler - Has no options 
# RoundRobin 
# - throttle_limit -- The throttle_limit is the number of in-flight 
#      requests per client. Requests beyond 
#      that limit are queued up until 
#      running requests can complete. 
#      The value of 80 here is twice the number of 
#      concurrent_reads + concurrent_writes. 
# - default_weight -- default_weight is optional and allows for 
#      overriding the default which is 1. 
# - weights -- Weights are optional and will default to 1 or the 
#    overridden default_weight. The weight translates into how 
#    many requests are handled during each turn of the 
#    RoundRobin, based on the scheduler id. 
# 
# request_scheduler_options: 
# throttle_limit: 80 
# default_weight: 5 
# weights: 
#  Keyspace1: 1 
#  Keyspace2: 5 

# request_scheduler_id -- An identifier based on which to perform 
# the request scheduling. Currently the only valid option is keyspace. 
# request_scheduler_id: keyspace 

# Enable or disable inter-node encryption 
# Default settings are TLS v1, RSA 1024-bit keys (it is imperative that 
# users generate their own keys) TLS_RSA_WITH_AES_128_CBC_SHA as the cipher 
# suite for authentication, key exchange and encryption of the actual data transfers. 
# Use the DHE/ECDHE ciphers if running in FIPS 140 compliant mode. 
# NOTE: No custom encryption options are enabled at the moment 
# The available internode options are : all, none, dc, rack 
# 
# If set to dc cassandra will encrypt the traffic between the DCs 
# If set to rack cassandra will encrypt the traffic between the racks 
# 
# The passwords used in these options must match the passwords used when generating 
# the keystore and truststore. For instructions on generating these files, see: 
# http://download.oracle.com/javase/6/docs/technotes/guides/security/jsse/JSSERefGuide.html#CreateKeystore 
# 
server_encryption_options: 
    internode_encryption: none 
    keystore: conf/.keystore 
    keystore_password: cassandra 
    truststore: conf/.truststore 
    truststore_password: cassandra 
    # More advanced defaults below: 
    #protocol: TLS 
    #algorithm: SunX509 
    #store_type: JKS 
    #cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA] 
    # require_client_auth: false 

# enable or disable client/server encryption. 
client_encryption_options: 
    enabled: false 
    keystore: conf/.keystore 
    keystore_password: cassandra 
    # require_client_auth: false 
    # Set truststore and truststore_password if require_client_auth is true 
    # truststore: conf/.truststore 
    # truststore_password: cassandra 
    # More advanced defaults below: 
    # protocol: TLS 
    # algorithm: SunX509 
    # store_type: JKS 
    # cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA] 

# internode_compression controls whether traffic between nodes is 
# compressed. 
# can be: all - all traffic is compressed 
#   dc - traffic between different datacenters is compressed 
#   none - nothing is compressed. 
internode_compression: all 

# Enable or disable tcp_nodelay for inter-dc communication. 
# Disabling it will result in larger (but fewer) network packets being sent, 
# reducing overhead from the TCP protocol itself, at the cost of increasing 
# latency if you block for cross-datacenter responses. 
inter_dc_tcp_nodelay: false 

# TTL for different trace types used during logging of the repair process. 
tracetype_query_ttl: 86400 
tracetype_repair_ttl: 604800 

# UDFs (user defined functions) are disabled by default. 
# As of Cassandra 2.2, there is no security manager or anything else in place that 
# prevents execution of evil code. CASSANDRA-9402 will fix this issue for Cassandra 3.0. 
# This will inherently be backwards-incompatible with any 2.2 UDF that perform insecure 
# operations such as opening a socket or writing to the filesystem. 
enable_user_defined_functions: false 

# The default Windows kernel timer and scheduling resolution is 15.6ms for power conservation. 
# Lowering this value on Windows can provide much tighter latency and better throughput, however 
# some virtualized environments may see a negative performance impact from changing this setting 
# below their system default. The sysinternals 'clockres' tool can confirm your system's default 
# setting. 
windows_timer_interval: 1 
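
For quick review, the settings in the file above that control how the two nodes discover and reach each other can be pulled out with a one-liner; the /etc/cassandra path is only an assumption, adjust it to wherever cassandra.yaml lives on these hosts:

# show cluster name, addresses, ports and the seed list from the pasted config
grep -E '^(cluster_name|listen_address|rpc_address|broadcast_rpc_address|storage_port|native_transport_port)|seeds:' /etc/cassandra/cassandra.yaml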

Answers

Answer (score: 5)

Schema operations in C* do not go through Cassandra's normal write path; they are replicated internally by a separate mechanism.

If you check nodetool describecluster, you will most likely see that each of your nodes is on a different schema version. A rolling restart should resolve this.
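
A minimal sketch of that check and of a rolling restart, assuming a package install where Cassandra runs as a system service (adjust the restart command to your installation), restarting one node at a time and waiting for it to show UN in nodetool status before moving on:

nodetool describecluster      # more than one entry under "Schema versions" means the nodes disagree
nodetool drain                # flush memtables and stop accepting writes on this node
sudo service cassandra restart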

Answer (score: -2)

Start cqlsh with the IP address of one of the nodes,

like this:

./cqlsh XXXX.XXXX.XXXX.XXXX

In a distributed setup you cannot run cqlsh against localhost; you must provide the IP address of the node on which you want to open the cqlsh prompt.
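
For example, with the node IPs from the question, and assuming rpc_address on each node is set to that node's own IP rather than localhost (otherwise the connection will be refused):

./cqlsh 10.0.7.80 9042
./cqlsh 10.0.7.240 9042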

Answer (score: -2)

For operation timeout errors in the Cassandra console (cqlsh), try increasing the timeout values in cqlsh.py:

cd /opt/apache-cassandra-3.7/bin 
vi cqlsh.py 

DEFAULT_CONNECT_TIMEOUT_SECONDS = 600 
DEFAULT_REQUEST_TIMEOUT_SECONDS = 600
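
Depending on the cqlsh version (the path above suggests Cassandra 3.7), the same values can often be passed on the command line instead of editing cqlsh.py; check cqlsh --help to confirm these flags exist in your build:

./cqlsh 10.0.7.80 --connect-timeout=600 --request-timeout=600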