Next, we use MySQL Shell (mysqlsh) to back up a table in the database. While running this experiment, I still ran into a few small problems.
util.dump_tables(["test"],"t","datadump1");
Traceback (most recent call last):
File "", line 1, in
TypeError: Util.dump_tables: Argument #1 is expected to be a string
The error says the first argument should be a string, so we adjust the call: util.dump_tables("test","t","datadump1");
Traceback (most recent call last):
File "", line 1, in
TypeError: Util.dump_tables: Argument #2 is expected to be an array
This time the error says the second argument should be an array, so we adjust again: util.dump_tables("test",["t"],"datadump1");
Acquiring global read lock
Global read lock acquired
Initializing – done
1 tables and 0 views will be dumped.
Gathering information – done
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Running data dump using 4 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing schema metadata – done
Writing DDL – done
Writing table metadata – done
Starting data dump
120% (6 rows / ~5 rows), 0.00 rows/s, 0.00 B/s uncompressed, 0.00 B/s compressed
Dump duration: 00:00:00s
Total duration: 00:00:00s
Schemas dumped: 1
Tables dumped: 1
Uncompressed data size: 23 bytes
Compressed data size: 41 bytes
Compression ratio: 0.6
Rows written: 6
Bytes written: 41 bytes
Average uncompressed throughput: 23.00 B/s
Average compressed throughput: 41.00 B/s
This time the table data was backed up successfully.
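The two TypeErrors above come from mysqlsh validating the type of each positional argument before doing any work: the schema must be a string, the list of tables must be an array, and the output directory must be a string. A minimal sketch of that validation in plain Python (a hypothetical `dump_tables` stub for illustration, not the real mysqlsh implementation):

```python
# Hypothetical stub that mimics the argument-type checks util.dump_tables
# performed in the errors above; it does no actual dumping.
def dump_tables(schema, tables, output_url):
    # Argument #1: the schema name must be a string.
    if not isinstance(schema, str):
        raise TypeError("Util.dump_tables: Argument #1 is expected to be a string")
    # Argument #2: the table names must be an array (a list here).
    if not isinstance(tables, list):
        raise TypeError("Util.dump_tables: Argument #2 is expected to be an array")
    # Argument #3: the output directory must be a string.
    if not isinstance(output_url, str):
        raise TypeError("Util.dump_tables: Argument #3 is expected to be a string")
    return "would dump {}.{} to {}".format(schema, tables, output_url)

# First failed attempt: the schema was passed as a list.
try:
    dump_tables(["test"], "t", "datadump1")
except TypeError as e:
    print(e)  # Argument #1 is expected to be a string

# Second failed attempt: the tables were passed as a plain string.
try:
    dump_tables("test", "t", "datadump1")
except TypeError as e:
    print(e)  # Argument #2 is expected to be an array

# The call shape that finally succeeded.
print(dump_tables("test", ["t"], "datadump1"))
```

Checking the argument types in this order also explains why fixing the first error immediately surfaced the second one: validation stops at the first mismatched argument.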
