
feat(shell): support resolve IP address in some shell commands #426

Merged Nov 19, 2019 (20 commits)

Conversation

@Smityz (Contributor) commented Nov 18, 2019

What problem does this PR solve?

What is changed and how it works?

Add support for resolving IP addresses to hostnames in some shell commands via the `-r` option.
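The `-r` option rewrites a numeric `ip:port` in command output as `hostname:port` via reverse DNS. A minimal sketch of that transformation (illustrative only, not the actual Pegasus implementation; the function name is made up), using POSIX `inet_pton(3)` to detect numeric addresses and `getnameinfo(3)` for the reverse lookup:

```cpp
// Sketch: turn "ip:port" into "hostname:port", leaving the input
// unchanged when it is not a numeric IPv4 address or when the
// reverse lookup fails. IPv4-only for brevity.
#include <arpa/inet.h>
#include <netdb.h>
#include <netinet/in.h>
#include <cstring>
#include <string>

std::string resolve_ip_to_hostname(const std::string &ip_port)
{
    size_t colon = ip_port.find(':');
    if (colon == std::string::npos)
        return ip_port; // not in "ip:port" form, leave unchanged

    std::string ip = ip_port.substr(0, colon);
    sockaddr_in addr;
    std::memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    if (inet_pton(AF_INET, ip.c_str(), &addr.sin_addr) != 1)
        return ip_port; // already a hostname, nothing to resolve

    char host[NI_MAXHOST];
    if (getnameinfo(reinterpret_cast<sockaddr *>(&addr), sizeof(addr),
                    host, sizeof(host), nullptr, 0, NI_NAMEREQD) != 0)
        return ip_port; // reverse lookup failed, keep the raw address

    return std::string(host) + ip_port.substr(colon);
}
```

Non-IP inputs pass through untouched, which is why hostnames that are already resolved appear unchanged in the `-r` output below.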

Check List

Tests

  • Unit test
  • Integration test
  • Manual test (add detailed scripts or steps below)
>>> app temp -d -r
[parameters]
app_name  : temp
detailed  : true

[general]
app_name           : temp
app_id             : 2   
partition_count    : 8   
max_replica_count  : 3   

[replicas]
pidx  ballot  replica_count  primary                        secondaries                                                    
0     3       3/3            smilencer-OptiPlex-7040:34801  [smilencer-OptiPlex-7040:34803,smilencer-OptiPlex-7040:34802]  
1     3       3/3            smilencer-OptiPlex-7040:34802  [smilencer-OptiPlex-7040:34801,smilencer-OptiPlex-7040:34803]  
2     3       3/3            smilencer-OptiPlex-7040:34803  [smilencer-OptiPlex-7040:34802,smilencer-OptiPlex-7040:34801]  
3     3       3/3            smilencer-OptiPlex-7040:34801  [smilencer-OptiPlex-7040:34803,smilencer-OptiPlex-7040:34802]  
4     3       3/3            smilencer-OptiPlex-7040:34802  [smilencer-OptiPlex-7040:34801,smilencer-OptiPlex-7040:34803]  
5     3       3/3            smilencer-OptiPlex-7040:34803  [smilencer-OptiPlex-7040:34802,smilencer-OptiPlex-7040:34801]  
6     3       3/3            smilencer-OptiPlex-7040:34801  [smilencer-OptiPlex-7040:34803,smilencer-OptiPlex-7040:34802]  
7     3       3/3            smilencer-OptiPlex-7040:34802  [smilencer-OptiPlex-7040:34801,smilencer-OptiPlex-7040:34803]  

[nodes]
node                           primary  secondary  total  
smilencer-OptiPlex-7040:34801  3        5          8      
smilencer-OptiPlex-7040:34802  3        5          8      
smilencer-OptiPlex-7040:34803  2        6          8      
                               8        16         24     

[healthy]
fully_healthy_partition_count    : 8
unhealthy_partition_count        : 0
write_unhealthy_partition_count  : 0
read_unhealthy_partition_count   : 0

>>> app_disk temp -d -r
[parameters]
app_name  : temp
detailed  : true

[result]
app_name                            : temp
app_id                              : 2   
partition_count                     : 8   
max_replica_count                   : 3   
disk_used_for_primary_replicas(MB)  : 0.00
disk_used_for_all_replicas(MB)      : 0.00
partitions not counted              : 0/8 
replicas not counted                : 0/24

[details]
pidx  ballot  replica_count  primary                              secondaries                                                                
0     3       3/3            smilencer-OptiPlex-7040:34801(0,#0)  [smilencer-OptiPlex-7040:34803(0,#0),smilencer-OptiPlex-7040:34802(0,#0)]  
1     3       3/3            smilencer-OptiPlex-7040:34802(0,#0)  [smilencer-OptiPlex-7040:34801(0,#0),smilencer-OptiPlex-7040:34803(0,#0)]  
2     3       3/3            smilencer-OptiPlex-7040:34803(0,#0)  [smilencer-OptiPlex-7040:34802(0,#0),smilencer-OptiPlex-7040:34801(0,#0)]  
3     3       3/3            smilencer-OptiPlex-7040:34801(0,#0)  [smilencer-OptiPlex-7040:34803(0,#0),smilencer-OptiPlex-7040:34802(0,#0)]  
4     3       3/3            smilencer-OptiPlex-7040:34802(0,#0)  [smilencer-OptiPlex-7040:34801(0,#0),smilencer-OptiPlex-7040:34803(0,#0)]  
5     3       3/3            smilencer-OptiPlex-7040:34803(0,#0)  [smilencer-OptiPlex-7040:34802(0,#0),smilencer-OptiPlex-7040:34801(0,#0)]  
6     3       3/3            smilencer-OptiPlex-7040:34801(0,#0)  [smilencer-OptiPlex-7040:34803(0,#0),smilencer-OptiPlex-7040:34802(0,#0)]  
7     3       3/3            smilencer-OptiPlex-7040:34802(0,#0)  [smilencer-OptiPlex-7040:34801(0,#0),smilencer-OptiPlex-7040:34803(0,#0)]  

>>> nodes -r
[details]
address                        status  
smilencer-OptiPlex-7040:34801  ALIVE   
smilencer-OptiPlex-7040:34802  ALIVE   
smilencer-OptiPlex-7040:34803  ALIVE   

[summary]
total_node_count    : 3
alive_node_count    : 3
unalive_node_count  : 0

>>> server_info -r
COMMAND: server-info

CALL [meta-server] [localhost:34601] succeed: Pegasus Server 1.12.SNAPSHOT (79535a83b8bf972cb4e5e711cab8191c6beb069f) Release, Started at 2019-11-19 11:28:08
CALL [meta-server] [localhost:34602] succeed: Pegasus Server 1.12.SNAPSHOT (79535a83b8bf972cb4e5e711cab8191c6beb069f) Release, Started at 2019-11-19 11:28:08
CALL [meta-server] [localhost:34603] succeed: Pegasus Server 1.12.SNAPSHOT (79535a83b8bf972cb4e5e711cab8191c6beb069f) Release, Started at 2019-11-19 11:28:08
CALL [replica-server] [smilencer-OptiPlex-7040:34801] succeed: Pegasus Server 1.12.SNAPSHOT (79535a83b8bf972cb4e5e711cab8191c6beb069f) Release, Started at 2019-11-19 11:28:08
CALL [replica-server] [smilencer-OptiPlex-7040:34802] succeed: Pegasus Server 1.12.SNAPSHOT (79535a83b8bf972cb4e5e711cab8191c6beb069f) Release, Started at 2019-11-19 11:28:08
CALL [replica-server] [smilencer-OptiPlex-7040:34803] succeed: Pegasus Server 1.12.SNAPSHOT (79535a83b8bf972cb4e5e711cab8191c6beb069f) Release, Started at 2019-11-19 11:28:08

Succeed count: 6
Failed count: 0

>>> server_stat -r
COMMAND: server-stat

CALL [meta-server] [localhost:34601] succeed: replica*app.pegasus*manual.compact.enqueue.count=not_found, replica*app.pegasus*manual.compact.running.count=not_found, replica*app.pegasus*rdb.block_cache.memory_usage=not_found, replica*eon.replica_stub*closing.replica(Count)=not_found, replica*eon.replica_stub*disk.available.max.ratio=not_found, replica*eon.replica_stub*disk.available.min.ratio=not_found, replica*eon.replica_stub*disk.available.total.ratio=not_found, replica*eon.replica_stub*disk.capacity.total(MB)=not_found, replica*eon.replica_stub*opening.replica(Count)=not_found, replica*eon.replica_stub*replica(Count)=not_found, replica*eon.replica_stub*replicas.commit.qps=not_found, replica*eon.replica_stub*replicas.learning.count=not_found, replica*eon.replica_stub*shared.log.size(MB)=not_found, replica*server*memused.res(MB)=not_found, replica*server*memused.virt(MB)=not_found, zion*profiler*RPC_RRDB_RRDB_GET.latency.server=not_found, zion*profiler*RPC_RRDB_RRDB_GET.qps=not_found, zion*profiler*RPC_RRDB_RRDB_MULTI_GET.latency.server=not_found, zion*profiler*RPC_RRDB_RRDB_MULTI_GET.qps=not_found, zion*profiler*RPC_RRDB_RRDB_MULTI_PUT.latency.server=not_found, zion*profiler*RPC_RRDB_RRDB_MULTI_PUT.qps=not_found, zion*profiler*RPC_RRDB_RRDB_PUT.latency.server=not_found, zion*profiler*RPC_RRDB_RRDB_PUT.qps=not_found
CALL [meta-server] [localhost:34602] succeed: replica*app.pegasus*manual.compact.enqueue.count=not_found, replica*app.pegasus*manual.compact.running.count=not_found, replica*app.pegasus*rdb.block_cache.memory_usage=not_found, replica*eon.replica_stub*closing.replica(Count)=not_found, replica*eon.replica_stub*disk.available.max.ratio=not_found, replica*eon.replica_stub*disk.available.min.ratio=not_found, replica*eon.replica_stub*disk.available.total.ratio=not_found, replica*eon.replica_stub*disk.capacity.total(MB)=not_found, replica*eon.replica_stub*opening.replica(Count)=not_found, replica*eon.replica_stub*replica(Count)=not_found, replica*eon.replica_stub*replicas.commit.qps=not_found, replica*eon.replica_stub*replicas.learning.count=not_found, replica*eon.replica_stub*shared.log.size(MB)=not_found, replica*server*memused.res(MB)=not_found, replica*server*memused.virt(MB)=not_found, zion*profiler*RPC_RRDB_RRDB_GET.latency.server=not_found, zion*profiler*RPC_RRDB_RRDB_GET.qps=not_found, zion*profiler*RPC_RRDB_RRDB_MULTI_GET.latency.server=not_found, zion*profiler*RPC_RRDB_RRDB_MULTI_GET.qps=not_found, zion*profiler*RPC_RRDB_RRDB_MULTI_PUT.latency.server=not_found, zion*profiler*RPC_RRDB_RRDB_MULTI_PUT.qps=not_found, zion*profiler*RPC_RRDB_RRDB_PUT.latency.server=not_found, zion*profiler*RPC_RRDB_RRDB_PUT.qps=not_found
CALL [meta-server] [localhost:34603] succeed: memused_res(MB)=39, memused_virt(MB)=470, get_p99(ns)=0, get_qps=0, multi_get_p99(ns)=0, multi_get_qps=0, multi_put_p99(ns)=0, multi_put_qps=0, put_p99(ns)=0, put_qps=0, replica*app.pegasus*manual.compact.enqueue.count=not_found, replica*app.pegasus*manual.compact.running.count=not_found, replica*app.pegasus*rdb.block_cache.memory_usage=not_found, replica*eon.replica_stub*closing.replica(Count)=not_found, replica*eon.replica_stub*disk.available.max.ratio=not_found, replica*eon.replica_stub*disk.available.min.ratio=not_found, replica*eon.replica_stub*disk.available.total.ratio=not_found, replica*eon.replica_stub*disk.capacity.total(MB)=not_found, replica*eon.replica_stub*opening.replica(Count)=not_found, replica*eon.replica_stub*replica(Count)=not_found, replica*eon.replica_stub*replicas.commit.qps=not_found, replica*eon.replica_stub*replicas.learning.count=not_found, replica*eon.replica_stub*shared.log.size(MB)=not_found
CALL [replica-server] [smilencer-OptiPlex-7040:34801] succeed: manual_compact_enqueue_count=0, manual_compact_running_count=0, rdb_block_cache_memory_usage=0, closing_replica_count=0, disk_available_max_ratio=87, disk_available_min_ratio=87, disk_available_total_ratio=87, disk_capacity_total(MB)=367306, opening_replica_count=0, serving_replica_count=12, commit_throughput=0, learning_count=0, shared_log_size(MB)=0, memused_res(MB)=56, memused_virt(MB)=749, get_p99(ns)=0, get_qps=0, multi_get_p99(ns)=0, multi_get_qps=0, multi_put_p99(ns)=0, multi_put_qps=0, put_p99(ns)=0, put_qps=0
CALL [replica-server] [smilencer-OptiPlex-7040:34802] succeed: manual_compact_enqueue_count=0, manual_compact_running_count=0, rdb_block_cache_memory_usage=0, closing_replica_count=0, disk_available_max_ratio=87, disk_available_min_ratio=87, disk_available_total_ratio=87, disk_capacity_total(MB)=367306, opening_replica_count=0, serving_replica_count=12, commit_throughput=0, learning_count=0, shared_log_size(MB)=0, memused_res(MB)=55, memused_virt(MB)=749, get_p99(ns)=0, get_qps=0, multi_get_p99(ns)=0, multi_get_qps=0, multi_put_p99(ns)=0, multi_put_qps=0, put_p99(ns)=0, put_qps=0
CALL [replica-server] [smilencer-OptiPlex-7040:34803] succeed: manual_compact_enqueue_count=0, manual_compact_running_count=0, rdb_block_cache_memory_usage=0, closing_replica_count=0, disk_available_max_ratio=87, disk_available_min_ratio=87, disk_available_total_ratio=87, disk_capacity_total(MB)=367306, opening_replica_count=0, serving_replica_count=12, commit_throughput=0, learning_count=0, shared_log_size(MB)=0, memused_res(MB)=56, memused_virt(MB)=749, get_p99(ns)=0, get_qps=0, multi_get_p99(ns)=0, multi_get_qps=0, multi_put_p99(ns)=0, multi_put_qps=0, put_p99(ns)=0, put_qps=0

Succeed count: 6
Failed count: 0

>>> flush_log -r
COMMAND: flush-log

CALL [meta-server] [localhost:34601] succeed: Flush done.
CALL [meta-server] [localhost:34602] succeed: Flush done.
CALL [meta-server] [localhost:34603] succeed: Flush done.
CALL [replica-server] [smilencer-OptiPlex-7040:34801] succeed: Flush done.
CALL [replica-server] [smilencer-OptiPlex-7040:34802] succeed: Flush done.
CALL [replica-server] [smilencer-OptiPlex-7040:34803] succeed: Flush done.

Succeed count: 6
Failed count: 0

  • No code

Code changes

  • Has exported function/method change
  • Has exported variable/fields change
  • Has interface methods change
  • Has persistent data change

Side effects

  • Possible performance regression
  • Increased code complexity
  • Breaking backward compatibility

Related changes

  • Need to cherry-pick to the release branch
  • Need to update the documentation
  • Need to be included in the release note

@vagetablechicken (Contributor) commented:

server_info -r
COMMAND: server-info

CALL [meta-server] [localhost:34601] succeed: Pegasus Server 1.12.SNAPSHOT (79535a8) Release, Started at 2019-11-19 11:28:08
CALL [meta-server] [localhost:34602] succeed: Pegasus Server 1.12.SNAPSHOT (79535a8) Release, Started at 2019-11-19 11:28:08
CALL [meta-server] [localhost:34603] succeed: Pegasus Server 1.12.SNAPSHOT (79535a8) Release, Started at 2019-11-19 11:28:08
CALL [replica-server] [smilencer-OptiPlex-7040:34801] succeed: Pegasus Server 1.12.SNAPSHOT (79535a8) Release, Started at 2019-11-19 11:28:08
CALL [replica-server] [smilencer-OptiPlex-7040:34802] succeed: Pegasus Server 1.12.SNAPSHOT (79535a8) Release, Started at 2019-11-19 11:28:08
CALL [replica-server] [smilencer-OptiPlex-7040:34803] succeed: Pegasus Server 1.12.SNAPSHOT (79535a8) Release, Started at 2019-11-19 11:28:08

Succeed count: 6
Failed count: 0

Why weren't the meta-server addresses resolved to hostnames here?

@Smityz (Contributor, Author) commented Nov 19, 2019

server_info -r
COMMAND: server-info
CALL [meta-server] [localhost:34601] succeed: Pegasus Server 1.12.SNAPSHOT (79535a8) Release, Started at 2019-11-19 11:28:08
CALL [meta-server] [localhost:34602] succeed: Pegasus Server 1.12.SNAPSHOT (79535a8) Release, Started at 2019-11-19 11:28:08
CALL [meta-server] [localhost:34603] succeed: Pegasus Server 1.12.SNAPSHOT (79535a8) Release, Started at 2019-11-19 11:28:08
CALL [replica-server] [smilencer-OptiPlex-7040:34801] succeed: Pegasus Server 1.12.SNAPSHOT (79535a8) Release, Started at 2019-11-19 11:28:08
CALL [replica-server] [smilencer-OptiPlex-7040:34802] succeed: Pegasus Server 1.12.SNAPSHOT (79535a8) Release, Started at 2019-11-19 11:28:08
CALL [replica-server] [smilencer-OptiPlex-7040:34803] succeed: Pegasus Server 1.12.SNAPSHOT (79535a8) Release, Started at 2019-11-19 11:28:08
Succeed count: 6
Failed count: 0

Why weren't the meta-server addresses resolved to hostnames here?

`localhost` is the result of reverse-resolving 127.0.0.1, while the name `smilencer-OptiPlex-7040:34803` is the result of reverse-resolving the machine's own IP.
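This can be reproduced outside the shell: reverse-resolving the loopback address typically consults `/etc/hosts`, which maps 127.0.0.1 to `localhost` rather than to the machine's hostname, whereas the machine's LAN IP maps to its configured hostname. A minimal check (illustrative, not Pegasus code):

```cpp
// Reverse-resolve an IPv4 address string; return the input on failure.
#include <arpa/inet.h>
#include <netdb.h>
#include <netinet/in.h>
#include <cstring>
#include <string>

std::string reverse_lookup(const std::string &ip)
{
    sockaddr_in addr;
    std::memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    if (inet_pton(AF_INET, ip.c_str(), &addr.sin_addr) != 1)
        return ip; // not a numeric IPv4 address
    char host[NI_MAXHOST];
    if (getnameinfo(reinterpret_cast<sockaddr *>(&addr), sizeof(addr),
                    host, sizeof(host), nullptr, 0, NI_NAMEREQD) != 0)
        return ip; // no PTR record / hosts entry
    return host;
}

// reverse_lookup("127.0.0.1") usually yields "localhost" (via /etc/hosts),
// while the machine's LAN IP yields its hostname, matching the meta-server
// vs replica-server difference seen above.
```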

Review threads:
  • src/shell/main.cpp (resolved)
  • src/shell/commands/table_management.cpp (outdated, resolved)
  • src/shell/commands/table_management.cpp (outdated, resolved)

neverchanje previously approved these changes Nov 19, 2019