Pandas Study Notes: Reading and Exporting Excel and CSV Files

When working with data in Pandas, the most common way to load it is from an Excel or CSV file, and you will often need to export the processed data back to Excel or CSV as well. These notes walk through Pandas' common file reading and export methods.

Loading Excel Files

In Pandas, Excel files are read with pd.read_excel(). The full signature is:

pandas.read_excel(io, sheet_name=0, header=0, names=None, index_col=None, usecols=None, squeeze=False, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skiprows=None, nrows=None, na_values=None, parse_dates=False, date_parser=None, thousands=None, comment=None, skipfooter=0, convert_float=True, **kwds)

Where:

  • io: the Excel file; can be a file path, a URL, a file-like object, or an xlrd workbook
  • sheet_name: which sheet(s) to return; can be a string (sheet name), an int (sheet index), a list of strings/ints (returns a dict of {'name': sheet}), or None (returns a dict of all sheets)
  • header: row(s) to use as the table header; int or list of ints, i.e. the row index(es) treated as the header
  • names: list of column names to use, array-like.
  • index_col: column(s) to use as the index; int or list of ints
  • usecols: columns to parse; the default None parses all columns; a single int parses up to and including that column; a list restricts parsing to those columns, e.g. "A:E" or "A,C,E:F"
  • squeeze: if the parsed data contains only one column, return a Series instead of the default DataFrame
  • dtype: type for each column, e.g. {'a': np.float64, 'b': np.int32}
  • engine: must be set if io is not a buffer or a path; accepted values are None or xlrd
  • converters: a dict mapping columns to the converter functions to apply to them.
  • true_values: values to consider True; rarely used
  • false_values: values to consider False; rarely used
  • skiprows: rows to skip, list-like
  • nrows: number of rows to parse
  • na_values: list of values to treat as N/A
  • parse_dates: a list of the columns to parse as dates
  • date_parser: function for converting the input strings into datetimes. Pandas' default parsing format is 'YYYY-MM-DD HH:MM:SS'; if the data to read is not in the default format, you must define the parser yourself.
  • thousands: thousands separator for parsing numbers
  • comment: comment marker; content after it is not parsed
  • skipfooter: number of rows to skip at the end
  • convert_float: convert floats with a zero fractional part to int
  • **kwds: additional keyword arguments

The function returns a DataFrame or a dict of DataFrames; the usual DataFrame operations then give you the data.

import pandas as pd 
excel_path = 'example.xlsx'
df = pd.read_excel(excel_path, sheet_name="Sheet1")
print(df.example_column_name)

The key parameters are io, sheet_name, header, names, and encoding. encoding was not covered in the list above; it specifies which character set (a standard codec from the codecs package) to use when reading the data.

For example, reading a file may raise an error like:

UnicodeDecodeError: 'utf-8' codec can't decode byte 0x84 in position 36: invalid start byte

The fix is to set encoding="utf_8_sig", encoding="cp500", or encoding="gbk"; you will need to experiment to find the right one.
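A minimal sketch of that trial-and-error, shown with read_csv (which accepts encoding directly) and a hypothetical example.csv:

import pandas as pd

df = None
for enc in ("utf_8_sig", "gbk", "cp500"):  # candidate encodings from above
    try:
        df = pd.read_csv("example.csv", encoding=enc)
        print("decoded with", enc)
        break
    except UnicodeDecodeError:
        continue  # wrong guess, try the next one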

Loading CSV Files

In Pandas, CSV files are read with pd.read_csv(). The full signature is:

pandas.read_csv(filepath_or_buffer, sep=', ', delimiter=None, header='infer', names=None, index_col=None, usecols=None, squeeze=False, prefix=None, mangle_dupe_cols=True, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skipinitialspace=False, skiprows=None, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, skip_blank_lines=True, parse_dates=False, infer_datetime_format=False, keep_date_col=False, date_parser=None, dayfirst=False, iterator=False, chunksize=None, compression='infer', thousands=None, decimal=b'.', lineterminator=None, quotechar='"', quoting=0, escapechar=None, comment=None, encoding=None, dialect=None, tupleize_cols=None, error_bad_lines=True, warn_bad_lines=True, skipfooter=0, doublequote=True, delim_whitespace=False, low_memory=True, memory_map=False, float_precision=None)

The parameters that differ from read_excel are:

  • filepath_or_buffer: accepts a file name, a URL, an open file handle, or any other object that provides a read method.
  • sep and delimiter: these mean the same thing (delimiter is an alias for sep); specifying \t (tab) gives you read_table's default behavior; regular expressions are supported for matching non-standard CSV files
  • mangle_dupe_cols: rename duplicate columns 'X' to 'X.1', 'X.2', …
  • skipinitialspace: skip spaces after the delimiter.
  • keep_default_na: whether to include the default NaN values when parsing the data.
  • na_filter: detect missing value markers (empty strings and the na_values values). For data without any NAs, passing na_filter=False can improve the performance of reading a large file
  • verbose: print the number of NA values placed in non-numeric columns
  • skip_blank_lines: if True, skip blank lines rather than interpreting them as NaN values.
  • infer_datetime_format: if True and parse_dates is enabled, Pandas will try to infer the format of the datetime strings in the column and, if it can be inferred, switch to a faster parsing method. In some cases this speeds up parsing 5-10x.
  • keep_date_col: whether to keep the original columns after the date column has been parsed out
  • dayfirst: date format; whether DD/MM has the day first
  • iterator: return a TextFileReader object for iteration or for get_chunk().
  • chunksize: return a TextFileReader object for iteration. See the IO Tools documentation for more on iterator and chunksize
  • compression: on-the-fly decompression of on-disk data
  • decimal: character to recognize as the decimal point
  • lineterminator: character to break the file into lines; only valid with the C parser
  • quotechar: character used to denote the start and end of a quoted item. Quoted items can include the delimiter, which is then ignored.
  • quoting: controls field quoting behavior
  • escapechar: character used to escape the delimiter when quoting is disabled
  • dialect: if provided, this parameter overrides the values (default or not) of the following parameters: delimiter, doublequote, escapechar, skipinitialspace, quotechar, and quoting. If values need to be overridden, a ParserWarning is issued. See the csv dialect documentation for details.
  • tupleize_cols: leave a list of tuples on the columns as-is (the default is to convert them to a MultiIndex on the columns)
  • error_bad_lines: lines with too many fields (e.g. a csv line with too many commas) raise an exception by default, and no DataFrame is returned. If False, these "bad lines" are dropped from the returned DataFrame instead.
  • warn_bad_lines: if error_bad_lines is False and warn_bad_lines is True, a warning is output for each "bad line".
  • doublequote: when quotechar is specified and quoting is not QUOTE_NONE, indicates whether to interpret two consecutive quotechar elements as a single quotechar element.
  • delim_whitespace: whether to use whitespace as the field delimiter
  • low_memory: internally process the file in chunks, which lowers memory use while parsing but may cause mixed type inference. To ensure there are no mixed types, set it to False or specify the types with the dtype parameter. Note that the whole file is still read into a single DataFrame; use the chunksize or iterator parameters to get the data back in chunks. (C parser only)
  • memory_map: if a file path is given for filepath_or_buffer, map the file object directly into memory and access the data from there. This option can improve performance because there is no longer any I/O overhead.
  • float_precision: specifies which converter the C engine should use for floating-point values.

Loading a CSV file works much like loading Excel, but importing CSV is far faster than importing Excel, so CSV is the recommended import format. If the CSV itself is malformed, however, rows may come in misaligned.
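A hedged sketch of the common parameters, assuming a local data.csv with an id column and a created_at date column; the second form uses the chunksize behavior described above to bound memory use:

import pandas as pd

# Explicit separator, per-column dtypes, and date parsing
df = pd.read_csv("data.csv", sep=",", dtype={"id": "int64"}, parse_dates=["created_at"])

# For large files, read in chunks of 100,000 rows and combine at the end
chunks = pd.read_csv("data.csv", chunksize=100000)
df = pd.concat(chunk for chunk in chunks)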

Besides importing CSV and Excel, Pandas also supports importing via:

  • read_sql(query, connection_object): import data from a SQL table/database
  • read_json(json_string): import data from a JSON-formatted string
  • DataFrame(dict): import data from a dict whose keys are the column names and whose values are the data
  • read_html(url): parse a URL, string, or HTML file and extract the tables in it
  • read_clipboard(): grab the clipboard contents and pass them to read_table()
  • read_table(filename): import data from a delimited text file

The one most worth studying here is pd.read_sql(query, connection_object); I plan to cover it in a later post.
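DataFrame(dict), by contrast, needs nothing but a quick illustration with made-up data:

import pandas as pd

# Keys become column names, values become the column data
df = pd.DataFrame({"name": ["Alice", "Bob"], "score": [90, 85]})
print(df)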

Exporting to Excel Files

Exporting to Excel is very simple:

import pandas as pd

excel_path = 'example.xlsx'
df = pd.read_excel(excel_path, sheet_name="Sheet1")
df.to_excel('output.xlsx')

The export method also takes many parameters:

DataFrame.to_excel(excel_writer, sheet_name='Sheet1', na_rep='', float_format=None, columns=None, header=True, index=True, index_label=None, startrow=0, startcol=0, engine=None, merge_cells=True, encoding=None, inf_rep='inf', verbose=True, freeze_panes=None)

The parameters are:

  • excel_writer: target Excel file to write to; can be a file path or an ExcelWriter object
  • sheet_name: name of the sheet to write to, string, default 'Sheet1'
  • na_rep: representation for missing values, string
  • float_format: format string for floating-point numbers
  • columns: the columns to write
  • header: whether to write the header row; bool or list of strings, default True
  • index: whether to write the row labels, bool, default True
  • index_label: label for the index column(s)
  • startrow: row to start writing at; rows above it are left empty
  • startcol: column to start writing at; columns before it are left empty
  • engine: write engine to use, one of io.excel.xlsx.writer, io.excel.xls.writer, io.excel.xlsm.writer
  • merge_cells: whether to write multi-level rows/columns as merged cells
  • encoding: encoding to write with, string.
  • inf_rep: how to represent mathematical infinity in Excel
  • verbose: display more information in the error logs
  • freeze_panes: rows/columns to freeze
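A hedged sketch of writing several DataFrames into one workbook through an ExcelWriter (df1, df2, and the sheet names are placeholders):

import pandas as pd

df1 = pd.DataFrame({"a": [1, 2]})
df2 = pd.DataFrame({"b": [3, 4]})

# An ExcelWriter lets multiple DataFrames share one workbook
with pd.ExcelWriter("output.xlsx") as writer:
    df1.to_excel(writer, sheet_name="first", index=False)
    df2.to_excel(writer, sheet_name="second", startrow=1, startcol=1)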

Exporting to CSV Files

Exporting works the same way as for Excel; the signature is:

DataFrame.to_csv(path_or_buf=None, sep=', ', na_rep='', float_format=None, columns=None, header=True, index=True, index_label=None, mode='w', encoding=None, compression=None, quoting=None, quotechar='"', line_terminator='\n', chunksize=None, tupleize_cols=None, date_format=None, doublequote=True, escapechar=None, decimal='.')

Parameters not already covered under "Loading CSV Files" or "Exporting to Excel Files":

  • path_or_buf: file name or file path to write to
  • mode: write mode, default 'w'; change to 'a' to append.
  • date_format: format string for datetime values
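A quick sketch; utf_8_sig writes a BOM so that Excel detects UTF-8 correctly, per the encoding notes earlier:

import pandas as pd

df = pd.DataFrame({"name": ["Alice", "Bob"], "score": [90, 85]})

# Write without the index
df.to_csv("output.csv", index=False, encoding="utf_8_sig")

# Append more rows later, skipping the header this time
df.to_csv("output.csv", mode="a", header=False, index=False, encoding="utf_8_sig")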

Pandas also supports exporting via:

  • to_sql(table_name, connection_object): export data to a SQL table
  • to_json(filename): export data to a text file in JSON format

df.to_sql will be covered in a later post.

References

Pandas学习笔记:Excel、CSV文件的读取与导出

OpenSTF Configuration

Installing OpenSTF

  • brew install on macOS
  • vagrant install on macOS
  • docker install on Ubuntu
  • docker install of stf-app on Ubuntu

Installing OpenSTF on macOS

Install dependencies

brew install graphicsmagick zeromq protobuf yasm pkg-config
  • rethinkdb: install via docker

Install OpenSTF

Versions used: node v8.13.0, python 2.7.15, stf 3.4.0

npm install -g stf
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
node-pre-gyp info check checked for "/Users/jinlong/.nvm/versions/node/v8.13.0/lib/node_modules/stf/node_modules/jpeg-turbo/lib/binding/node-v57-darwin-x64/jpegturbo.node" (not found)
node-pre-gyp http GET https://pre-gyp.s3.amazonaws.com/jpegturbo/v0.4.0/jpegturbo-v0.4.0-node-v57-darwin-x64.tar.gz
node-pre-gyp verb download using proxy url: "http://127.0.0.1:1087"
node-pre-gyp http 403 https://pre-gyp.s3.amazonaws.com/jpegturbo/v0.4.0/jpegturbo-v0.4.0-node-v57-darwin-x64.tar.gz
node-pre-gyp http 403 status code downloading tarball https://pre-gyp.s3.amazonaws.com/jpegturbo/v0.4.0/jpegturbo-v0.4.0-node-v57-darwin-x64.tar.gz (falling back to source compile with node-gyp)
node-pre-gyp verb command build [ 'rebuild' ]


npm WARN deprecated @slack/client@3.16.0: v3.x and lower are no longer supported. see migration guide: https://github.com/slackapi/node-slack-sdk/wiki/Migration-Guide-for-v4
npm WARN deprecated node-uuid@1.4.8: Use uuid module instead
npm WARN deprecated boom@2.10.1: This version is no longer maintained. Please upgrade to the latest version.
npm WARN deprecated cryptiles@2.0.5: This version is no longer maintained. Please upgrade to the latest version.
npm WARN deprecated hoek@2.16.3: This version is no longer maintained. Please upgrade to the latest version.
npm WARN deprecated ejs@0.8.8: Critical security bugs fixed in 2.5.5


CXX(target) Release/obj.target/zmq/binding.o
In file included from ../binding.cc:29:
/usr/local/Cellar/zeromq/4.2.5/include/zmq_utils.h:42:32: warning: unknown warning group '-Werror',
ignored [-Wunknown-warning-option]
#pragma GCC diagnostic ignored "-Werror"
^
/usr/local/Cellar/zeromq/4.2.5/include/zmq_utils.h:45:9: warning: Warning: zmq_utils.h is
deprecated. All its functionality is provided by zmq.h. [-W#pragma-messages]
#pragma message( \
^
../binding.cc:999:15: warning: '~MessageReference' has a non-throwing exception specification but
can still throw [-Wexceptions]
throw std::runtime_error(ErrorMessage());
^
../binding.cc:997:18: note: destructor has a implicit non-throwing exception specification
inline ~MessageReference() {
^
../binding.cc:1205:11: warning: '~OutgoingMessage' has a non-throwing exception specification but
can still throw [-Wexceptions]
throw std::runtime_error(ErrorMessage());
^
../binding.cc:1203:14: note: destructor has a implicit non-throwing exception specification
inline ~OutgoingMessage() {
^
4 warnings generated.
SOLINK_MODULE(target) Release/zmq.node
ld: warning: directory not found for option '-L/opt/local/lib'

Start

stf local

STFService.apk location

.nvm/versions/node/v8.11.3/lib/node_modules/stf/vendor/STFService/STFService.apk

ERR/provider 87127 [*] Device worker "67e2906f" died with code 1

https://github.com/openstf/stf/blob/master/vendor/STFService/STFService.apk

Developer options -> enable USB debugging (Security settings), allowing permission changes and simulated input via USB debugging

adb uninstall jp.co.cyberagent.stf
adb install STFService.apk
adb shell am start -n jp.co.cyberagent.stf/.IdentityActivity
adb shell am startservice -n jp.co.cyberagent.stf/.Service

Installing with Docker on Ubuntu

Set up docker

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
sudo apt update

apt-cache policy docker-ce

Install docker

sudo apt install docker-ce

Check docker's status

sudo systemctl status docker

Set up openstf

Pull the images

openstf/stf

docker pull openstf/stf:v3.4.0

openstf/ambassador

docker pull openstf/ambassador:latest

rethinkdb

docker pull rethinkdb:2.3.6

sorccu/adb

docker pull sorccu/adb:latest

nginx

docker pull nginx:1.15.7-alpine
Or pull them all in one go:
for name in openstf/stf:v3.4.0 openstf/ambassador:latest rethinkdb:2.3.6 sorccu/adb:latest nginx:1.15.7-alpine; do
docker pull $name
done

Fixing "Error response from daemon: Get https://registry-1.docker.io/v2"

Start the containers

  • Start the database
docker run -d --name rethinkdb-2.3.6 --net host -v /data/docker/rethinkdb:/data rethinkdb:2.3.6
docker stop rethinkdb-2.3.6
docker rm rethinkdb-2.3.6
  • Start the adb service
docker run -d --name adbd --privileged --net host -v /dev/bus/usb:/dev/bus/usb sorccu/adb:latest
docker stop adbd
docker rm adbd
docker start adbd
  • Start stf; the IP address configured at startup should be your server's IP
docker run -d --name stf-3.4.0 --net host openstf/stf:v3.4.0 stf local --public-ip 10.8.8.118
docker stop stf-3.4.0
docker rm stf-3.4.0
docker start stf-3.4.0

STF Production Deployment as a Docker Cluster

To develop STF locally you must install Node.js, ADB, RethinkDB, GraphicsMagick, ZeroMQ, Protocol Buffers, yasm, and pkg-config one by one. If every new machine node in production needed all of that, deployment would be a very painful process. STF officially recommends Docker: the server only needs Docker installed, and then pulls the following images in turn:

docker pull openstf/stf:latest
docker pull sorccu/adb:latest
docker pull rethinkdb:latest
docker pull openstf/ambassador:latest
docker pull nginx:latest

openstf/stf is the main STF image. sorccu/adb is the adb tool; if the server already has an adb environment you can skip it. rethinkdb is the STF database image, and openstf/ambassador is a network proxy tool. nginx is a web server and reverse-proxy tool that STF relies on to route different URLs to the different units; without nginx, STF simply cannot work in production. Once these images are pulled, STF runs directly in containers; no other STF tooling needs to be installed, since the openstf/stf image already bundles every dependency. If you have built a customized STF, though, deploying it to servers with Docker is not just a matter of running stf local; it takes the following steps:

Publishing the image

git clone https://github.com/openstf/stf.git
cd stf/docker

openstf/stf is the image STF officially pushes to the Docker registry; after secondary development, in theory you only need to replace that image. Building an image starts from a Dockerfile, and the source root already provides one, so you can build directly from it. In the Dockerfile's directory, run sudo docker build -t dystf/stf:latest . where dystf/stf is the image name and latest is the tag. With a good network the build finishes within 10 minutes; with a bad one it can take over half an hour or even fail, in which case keep retrying. When it completes, sudo docker images shows the freshly built image. If a newly published image turns out to be broken, you can quickly roll back to the previous version. After every code change, rebuild the image on the main server and save it so it can be distributed to the other servers, which then don't have to rebuild it themselves, quick and convenient: sudo docker save <image ID> > dystf_image.tar

Cluster deployment

With only 10-15 devices, a single server node started with stf local is enough. A real production environment, though, provides far more than a dozen devices, possibly over a hundred; at roughly 10 phones per server node, a hundred devices means at least 10 server nodes. Cluster deployment solves this, and makes it fast to bring new nodes online. STF consists of multiple independently running processes that communicate via ZeroMQ and Protocol Buffers; each process node is called a "unit". These units can be deployed onto different servers and routed to via the nginx configuration; see the official documentation for details. In our remote real-device deployment, one server with good specs was chosen as the main node and connects no devices itself; provider nodes act as the device-supplying agents, reporting their devices to the main server, which displays them.

A provider server node deploys the following units

  • adbd.service(adb环境)
  • stf-provider@.service

stf-provider's job is to supply phones to the main framework. A provider can run on the same machine or on other machines, but the machine must be directly reachable, because the phone screens are actually streamed from it. The provider machine must have a working adb environment first; Android phones depend on adb for debugging, after all.

Adding a new provider node

Adding new devices means adding new server nodes; a new node takes only a few minutes to bring online:

  • Step 1: install the docker and adb environments.

  • Step 2: import the image. scp the dystf_image.tar published from the main server in the step above onto this server, then load it: sudo docker load < /Users/mtp/dystf_image.tar. The freshly imported image has no name or tag, so tag it, with dystf/stf:latest as the name and tag: sudo docker tag $(sudo docker images --filter "dangling=true" -q) dystf/stf:latest

  • Step 3: start stf-provider, using the command shown above.

  • Step 4: configure nginx. In the main server's nginx.conf, add a block for the provider node just started:

    location ~ "^/d/新节点启动名称,如上例的agentX/(/+)/(?[0-9]{5})/$" { 
    proxy_pass http://新节点IP2:$port/; proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Real-IP $remote_addr;
    }

    Once configured, restart nginx; the devices connected to the new node will then be reported to the main server.

STF Setup Examples using Vagrant and Docker

git clone https://github.com/openstf/setup-examples.git
cd setup-examples

Create Rethinkdb Cluster

Let's create the rethinkdb cluster. Go to the db folder and run vagrant up. Yeah, that's it.

cd ./db; vagrant up

Deploying OpenSTF

IP address

  • app role, database role: 10.8.8.128
  • provider1: 10.8.8.131

Database role

The database role requires the following units, UNLESS you already have a working RethinkDB server/cluster running somewhere. In that case you simply will not have this role, and should point your rethinkdb-proxy-28015.service to that server instead.

App role

The app role can contain any of the following units. You may distribute them as you wish, as long as the assumptions above hold. Some units may have more requirements, they will be listed where applicable.

Provider role

The provider role requires the following units, which must run together, on a single host or on several such hosts.

App role configuration

Start the nginx container

mkdir /data/stf/nginx/ -p

cd /data/stf/nginx/

vim nginx.conf
#daemon off;
worker_processes 4;

events {
worker_connections 1024;
}

http {
upstream stf_app {
server 10.8.8.128:3100 max_fails=0;
}

upstream stf_auth {
server 10.8.8.128:3101 max_fails=0;
}

upstream stf_storage_apk {
server 10.8.8.128:3102 max_fails=0;
}

upstream stf_storage_image {
server 10.8.8.128:3103 max_fails=0;
}

upstream stf_storage {
server 10.8.8.128:3104 max_fails=0;
}

upstream stf_websocket {
server 10.8.8.128:3105 max_fails=0;
}

upstream stf_api {
server 10.8.8.128:3106 max_fails=0;
}

types {
application/javascript js;
image/gif gif;
image/jpeg jpg;
text/css css;
text/html html;
}

map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}

server {
listen 80;
server_name stf.ovwane.com;
keepalive_timeout 70;
# resolver 114.114.114.114 8.8.8.8 valid=300s;
# resolver_timeout 10s;

# Handle stf-provider@floor1.service
location ~ "^/d/floor1/([^/]+)/(?<port>[0-9]{5})/$" {
proxy_pass http://10.8.8.131:$port/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Real-IP $remote_addr;
}

# Handle stf-provider@floor2.service
location ~ "^/d/floor2/([^/]+)/(?<port>[0-9]{5})/$" {
proxy_pass http://10.8.8.132:$port/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Real-IP $remote_addr;
}

location /auth/ {
proxy_pass http://stf_auth/auth/;
}

location /api/ {
proxy_pass http://stf_api/api/;
}

location /s/image/ {
proxy_pass http://stf_storage_image;
}

location /s/apk/ {
proxy_pass http://stf_storage_apk;
}

location /s/ {
client_max_body_size 1024m;
client_body_buffer_size 128k;
proxy_pass http://stf_storage;
}

location /socket.io/ {
proxy_pass http://stf_websocket;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $http_x_real_ip;
}

location / {
proxy_pass http://stf_app;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $http_x_real_ip;
}
}
}

Start nginx

docker run -d --name stf-nginx-1.15.7 --net host -v /data/stf/nginx/nginx.conf:/etc/nginx/nginx.conf:ro nginx:1.15.7-alpine

Start rethinkdb

docker run -d --name stf-rethinkdb-2.3.6 -v /data/stf/rethinkdb:/data -e "AUTHKEY=RETHINKDBAUTHKEYANY" --net host rethinkdb:2.3.6 rethinkdb --bind all --cache-size 8192 --http-port 8090 --no-update-check

Start stf-migrate to initialize the database tables

docker run -d --name stf-migrate-3.4.0 --net host openstf/stf:v3.4.0 stf migrate

Start stf-app

docker run -d --name stf-app-3.4.0 --net host -e "SECRET=RETHINKDBAUTHKEYANY" openstf/stf:v3.4.0 stf app --port 3100 --auth-url http://stf.ovwane.com/auth/mock/ --websocket-url ws://stf.ovwane.com/

Start stf-auth

docker run -d --name stf-auth-3.4.0 --net host -e "SECRET=RETHINKDBAUTHKEYANY" openstf/stf:v3.4.0 stf auth-mock --port 3101 --app-url http://stf.ovwane.com/

Once stf-app and stf-auth are running you can log in.

Start stf-websocket

docker run -d --name stf-websocket-3.4.0 --net host -e "SECRET=RETHINKDBAUTHKEYANY" openstf/stf:v3.4.0 stf websocket --port 3105 --storage-url http://stf.ovwane.com/ --connect-sub tcp://stf.ovwane.com:7150 --connect-push tcp://stf.ovwane.com:7170

Start stf-api

docker run -d --name stf-api-3.4.0 --net host -e "SECRET=RETHINKDBAUTHKEYANY" openstf/stf:v3.4.0 stf api --port 3106 --connect-sub tcp://stf.ovwane.com:7150 --connect-push tcp://stf.ovwane.com:7170

Start stf-storage-plugin-apk

docker run -d --name stf-storage-plugin-apk-3.4.0 --net host openstf/stf:v3.4.0 stf storage-plugin-apk --port 3102 --storage-url http://stf.ovwane.com/

Start stf-storage-plugin-image

docker run -d --name stf-storage-plugin-image-3.4.0 --net host openstf/stf:v3.4.0 stf storage-plugin-image --port 3103 --storage-url http://stf.ovwane.com/

Start stf-storage-temp

docker run -d --name stf-storage-temp-3.4.0 --net host openstf/stf:v3.4.0 stf storage-temp --port 3104 --save-dir /data

Start stf-triproxy-app

docker run -d --name stf-triproxy-app-3.4.0 --net host openstf/stf:v3.4.0 stf triproxy app --bind-pub "tcp://*:7150" --bind-dealer "tcp://*:7160" --bind-pull "tcp://*:7170"

Start stf-processor

docker run -d --name stf-processor-3.4.0 --net host openstf/stf:v3.4.0 stf processor stf-processer --connect-app-dealer tcp://stf.ovwane.com:7160 --connect-dev-dealer tcp://stf.ovwane.com:7260

Start stf-triproxy-dev

docker run -d --name stf-triproxy-dev-3.4.0 --net host openstf/stf:v3.4.0 stf triproxy dev --bind-pub "tcp://*:7250" --bind-dealer "tcp://*:7260" --bind-pull "tcp://*:7270"

Start stf-reaper

docker run -d --name stf-reaper-3.4.0 --net host openstf/stf:v3.4.0 stf reaper dev --connect-push tcp://stf.ovwane.com:7270 --connect-sub tcp://stf.ovwane.com:7150 --heartbeat-timeout 30000

Start stf-log-rethinkdb (optional)

docker run -d --name stf-log-rethinkdb-3.4.0 --net host openstf/stf:v3.4.0 stf log-rethinkdb --connect-sub tcp://devside.stf.ovwane.com:7150

devside or appside

Provider role configuration

Every provider machine must run adbd and stf-provider.

Start adbd

docker run -d --name adbd --privileged --net host -v /dev/bus/usb:/dev/bus/usb sorccu/adb:latest

Start stf-provider1

docker run -d --name stf-provider-3.4.0-1 --net host openstf/stf:v3.4.0 stf provider --name "provider-1" --connect-sub tcp://devside.stf.ovwane.com:7250 --connect-push tcp://devside.stf.ovwane.com:7270 --storage-url http://stf.ovwane.com --public-ip provider1.stf.ovwane.com --min-port=15000 --max-port=25000 --heartbeat-interval 20000 --screen-ws-url-pattern "ws://stf.ovwane.com/d/floor1/<%= serial %>/<%= publicPort %>/"

Start stf-provider2

docker run -d --name stf-provider-3.4.0-2 --net host openstf/stf:v3.4.0 stf provider --name "provider-2" --connect-sub tcp://devside.stf.ovwane.com:7250 --connect-push tcp://devside.stf.ovwane.com:7270 --storage-url http://stf.ovwane.com --public-ip provider2.stf.ovwane.com --min-port=15000 --max-port=25000 --heartbeat-interval 20000 --screen-ws-url-pattern "ws://stf.ovwane.com/d/floor2/<%= serial %>/<%= publicPort %>/"

Errors

root@ubuntu:~# docker logs -f stf_provider_stf-provider_1
2018-12-12T12:52:15.648Z INF/provider 1 [*] Subscribing to permanent channel "aA26iWR0Sv6dhk8e+lOKaw=="
2018-12-12T12:52:15.660Z INF/provider 1 [*] Sending output to "tcp://devside.stf.ovwane.com:7270"
2018-12-12T12:52:15.662Z INF/provider 1 [*] Receiving input from "tcp://devside.stf.ovwane.com:7250"
2018-12-12T12:52:15.668Z INF/provider 1 [*] Tracking devices
2018-12-12T12:52:16.546Z INF/provider 1 [*] Found device "27423f88" (offline)
2018-12-12T12:52:16.566Z INF/provider 1 [*] Registered device "27423f88"
2018-12-12T12:52:16.567Z INF/provider 1 [*] Lost device "27423f88" (offline)
2018-12-12T12:52:17.573Z INF/provider 1 [*] Found device "27423f88" (offline)
2018-12-12T12:52:17.591Z INF/provider 1 [*] Registered device "27423f88"
2018-12-12T12:52:17.592Z INF/provider 1 [*] Lost device "27423f88" (offline)

Installing OpenSTF with Docker on Ubuntu

IP address

  • proxy role: 10.8.8.128
  • database role: 10.8.8.128
  • app role: 10.8.8.128
  • provider1: 10.8.8.131

/etc/hosts

10.8.8.128 stf.ovwane.com
10.8.8.128 devside.stf.ovwane.com
10.8.8.131 provider1.stf.ovwane.com

Proxy role

Database role

The database role requires the following units, UNLESS you already have a working RethinkDB server/cluster running somewhere. In that case you simply will not have this role, and should point your rethinkdb-proxy-28015.service to that server instead.

App role

The app role can contain any of the following units. You may distribute them as you wish, as long as the assumptions above hold. Some units may have more requirements, they will be listed where applicable.

Provider role

The provider role requires the following units, which must run together, on a single host or on several such hosts.

Proxy role configuration

Pull the image

docker pull nginx:1.15.7-alpine

Start the nginx container

/data/nginx/conf/nginx.conf

vim stf.<domain>.conf
#daemon off
worker_processes 4;

events {
worker_connections 1024;
}

http {
upstream stf_app {
server 10.8.8.128:3100 max_fails=0;
}

upstream stf_auth {
server 10.8.8.128:3101 max_fails=0;
}

upstream stf_storage_apk {
server 10.8.8.128:3102 max_fails=0;
}

upstream stf_storage_image {
server 10.8.8.128:3103 max_fails=0;
}

upstream stf_storage {
server 10.8.8.128:3104 max_fails=0;
}

upstream stf_websocket {
server 10.8.8.128:3105 max_fails=0;
}

upstream stf_api {
server 10.8.8.128:3106 max_fails=0;
}

types {
application/javascript js;
image/gif gif;
image/jpeg jpg;
text/css css;
text/html html;
}

map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}

server {
listen 80;
server_name stf.ovwane.com;
keepalive_timeout 70;
# resolver 114.114.114.114 8.8.8.8 valid=300s;
# resolver_timeout 10s;

# Handle stf-provider@floor1.service
location ~ "^/d/provider1/([^/]+)/(?<port>[0-9]{5})/$" {
proxy_pass http://10.8.8.131:$port/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Real-IP $remote_addr;
}

# Handle stf-provider@floor2.service
location ~ "^/d/provider2/([^/]+)/(?<port>[0-9]{5})/$" {
proxy_pass http://10.8.8.132:$port/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Real-IP $remote_addr;
}

location /auth/ {
proxy_pass http://stf_auth/auth/;
}

location /api/ {
proxy_pass http://stf_api/api/;
}

location /s/image/ {
proxy_pass http://stf_storage_image;
}

location /s/apk/ {
proxy_pass http://stf_storage_apk;
}

location /s/ {
client_max_body_size 1024m;
client_body_buffer_size 128k;
proxy_pass http://stf_storage;
}

location /socket.io/ {
proxy_pass http://stf_websocket;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $http_x_real_ip;
}

location / {
proxy_pass http://stf_app;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $http_x_real_ip;
}
}
}

Start nginx

docker run -d --name stf-nginx-1.15.7 --net host -v /data/stf/nginx/nginx.conf:/etc/nginx/nginx.conf:ro nginx:1.15.7-alpine

Database role configuration

Pull the image

docker pull rethinkdb:2.3.6

Start rethinkdb

# docker run -d --name stf-rethinkdb-2.3.6 -v /data/stf/rethinkdb:/data -e "AUTHKEY=RETHINKDBAUTHKEYANY" --net host rethinkdb:2.3.6 rethinkdb --bind all --cache-size 8192 --http-port 8090 --no-update-check

docker run -d --name stf-rethinkdb-2.3.6 -v /data/stf/rethinkdb:/data --net host rethinkdb:2.3.6 rethinkdb --bind all --cache-size 8192 --http-port 8090 --no-update-check

App role configuration

Pull the images

for name in openstf/stf:v3.4.0 openstf/ambassador:latest; do
docker pull $name
done

Set variables

STF_VERSION=3.4.0 \
STF_HOST=stf.<domain> \
STF_URL=https://$STF_HOST \
STF_IMAGE=openstf/stf:v$STF_VERSION \
STF_SECRET=RETHINKDBAUTHKEYANY

Start stf-migrate to initialize the database tables

docker run -d --name stf-migrate-$STF_VERSION --net host $STF_IMAGE stf migrate

Start stf-app

docker run -d --name stf-app-$STF_VERSION --net host -e "SECRET=${STF_SECRET}" $STF_IMAGE stf app --port 3100 --auth-url $STF_URL/auth/mock/ --websocket-url ws://$STF_HOST/

Start stf-auth

docker run -d --name stf-auth-$STF_VERSION --net host -e "SECRET=${STF_SECRET}" $STF_IMAGE stf auth-mock --port 3101 --app-url $STF_URL

Once stf-app and stf-auth are running you can log in.

Start stf-websocket

docker run -d --name stf-websocket-$STF_VERSION --net host -e "SECRET=${STF_SECRET}" $STF_IMAGE stf websocket --port 3105 --storage-url $STF_URL --connect-sub tcp://$STF_HOST:7150 --connect-push tcp://$STF_HOST:7170

Start stf-api

docker run -d --name stf-api-$STF_VERSION --net host -e "SECRET=${STF_SECRET}" $STF_IMAGE stf api --port 3106 --connect-sub tcp://$STF_HOST:7150 --connect-push tcp://$STF_HOST:7170

Start stf-storage-plugin-apk

docker run -d --name stf-storage-plugin-apk-$STF_VERSION --net host $STF_IMAGE stf storage-plugin-apk --port 3102 --storage-url $STF_URL

Start stf-storage-plugin-image

docker run -d --name stf-storage-plugin-image-$STF_VERSION --net host $STF_IMAGE stf storage-plugin-image --port 3103 --storage-url $STF_URL

Start stf-storage-temp

docker run -d --name stf-storage-temp-$STF_VERSION --net host $STF_IMAGE stf storage-temp --port 3104 --save-dir /data

Start stf-triproxy-app

docker run -d --name stf-triproxy-app-$STF_VERSION --net host $STF_IMAGE stf triproxy app --bind-pub "tcp://*:7150" --bind-dealer "tcp://*:7160" --bind-pull "tcp://*:7170"

Start stf-processor

docker run -d --name stf-processor-$STF_VERSION --net host $STF_IMAGE stf processor stf-processer --connect-app-dealer tcp://$STF_HOST:7160 --connect-dev-dealer tcp://$STF_HOST:7260

Start stf-triproxy-dev

docker run -d --name stf-triproxy-dev-$STF_VERSION --net host $STF_IMAGE stf triproxy dev --bind-pub "tcp://*:7250" --bind-dealer "tcp://*:7260" --bind-pull "tcp://*:7270"

Start stf-reaper

docker run -d --name stf-reaper-$STF_VERSION --net host $STF_IMAGE stf reaper dev --connect-push tcp://$STF_HOST:7270 --connect-sub tcp://$STF_HOST:7150 --heartbeat-timeout 30000

Start stf-log-rethinkdb (optional)

docker run -d --name stf-log-rethinkdb-$STF_VERSION --net host $STF_IMAGE stf log-rethinkdb --connect-sub tcp://devside.$STF_HOST:7150

devside or appside

dc.sh

cat > dc.sh <<'EOF'
#!/usr/bin/env bash
# Quoting 'EOF' keeps $APP_DIR and $1 from expanding while this file is written

APP_DIR=/root/stf_app

function stop(){
cd $APP_DIR
docker-compose down
}

function start(){
cd $APP_DIR
docker-compose up -d
}

$1
EOF
chmod +x dc.sh

vim /etc/systemd/system/stf-app.service

[Unit]
Description=stf app
Wants=network.target
After=network.target

[Service]
Type=oneshot
ExecStartPre=/root/stf_app/dc.sh stop
ExecStart=/root/stf_app/dc.sh start
ExecStop=/root/stf_app/dc.sh stop
RemainAfterExit=yes
StandardOutput=journal
StandardError=inherit

[Install]
WantedBy=multi-user.target

Start it

systemctl enable stf-app.service
systemctl start stf-app.service
systemctl status stf-app.service
systemctl stop stf-app.service

Provider role configuration

Pull the images

for name in sorccu/adb:latest openstf/stf:v3.4.0; do
docker pull $name
done

Every provider machine must run adbd and stf-provider.

Start adbd

With this approach, newly attached phones are not detected automatically.

docker run -d --name adbd --privileged --net host -v /dev/bus/usb:/dev/bus/usb sorccu/adb:latest

Using Ubuntu's own adb

apt -y install adb

vim /etc/systemd/system/adbd.service

[Unit]
Description=Android Debug Bridge daemon
After=network.target

[Service]
#TimeoutStartSec=1min
Restart=always
RestartSec=2s
Type=forking
User=root
ExecStartPre=/usr/bin/adb kill-server
ExecStart=/usr/bin/adb start-server
ExecStop=/usr/bin/adb kill-server

[Install]
WantedBy=multi-user.target

Start it

systemctl enable adbd.service
systemctl start adbd.service
systemctl status adbd.service

dc.sh

cat > dc.sh <<'EOF'
#!/usr/bin/env bash
# Quoting 'EOF' keeps $APP_DIR and $1 from expanding while this file is written

APP_DIR=/root/stf_provider

function stop(){
cd $APP_DIR
docker-compose down
}

function start(){
cd $APP_DIR
docker-compose up -d
}

$1
EOF
chmod +x dc.sh

vim /etc/systemd/system/stf-provider.service

[Unit]
Description=stf provider
After=adbd.service

[Service]
Type=oneshot
ExecStartPre=/root/stf_provider/dc.sh stop
ExecStart=/root/stf_provider/dc.sh start
ExecStop=/root/stf_provider/dc.sh stop
RemainAfterExit=yes
StandardOutput=journal
StandardError=inherit

[Install]
WantedBy=multi-user.target

Start it

systemctl enable stf-provider.service
systemctl start stf-provider.service
systemctl status stf-provider.service
systemctl stop stf-provider.service

Start stf-provider1

Set environment variables

STF_VERSION=3.4.0 \
STF_HOST=stf.<domain> \
STF_URL=https://$STF_HOST \
STF_IMAGE=openstf/stf:v$STF_VERSION \
STF_PROVIDER=provider1

Start stf-provider

docker run -d --name stf-${STF_PROVIDER}-$STF_VERSION --net host $STF_IMAGE stf provider --name "${STF_PROVIDER}" --connect-sub tcp://devside.$STF_HOST:7250 --connect-push tcp://devside.$STF_HOST:7270 --storage-url $STF_URL --public-ip $STF_PROVIDER.$STF_HOST --min-port=15000 --max-port=25000 --heartbeat-interval 20000 --screen-ws-url-pattern "ws://${STF_HOST}/d/${STF_PROVIDER}/<%= serial %>/<%= publicPort %>/"

Start stf-provider2

Set environment variables

STF_VERSION=3.4.0 \
STF_HOST=stf.<domain> \
STF_URL=https://$STF_HOST \
STF_IMAGE=openstf/stf:v$STF_VERSION \
STF_PROVIDER=provider2

Start stf-provider

docker run -d --name stf-${STF_PROVIDER}-$STF_VERSION --net host $STF_IMAGE stf provider --name "${STF_PROVIDER}" --connect-sub tcp://devside.$STF_HOST:7250 --connect-push tcp://devside.$STF_HOST:7270 --storage-url $STF_URL --public-ip $STF_PROVIDER.$STF_HOST --min-port=15000 --max-port=25000 --heartbeat-interval 20000 --screen-ws-url-pattern "ws://${STF_HOST}/d/${STF_PROVIDER}/<%= serial %>/<%= publicPort %>/"

References

STF 折腾之路 最后换成 Docker 来安装

STF 正式环境 docker 化集群部署

STF docker 集群部署,树莓派做子节点,附带完整配置

Mac 上用 docker 安装 openstf–一步一坑从入门到放弃

OpenStf Deployment

STF docker 集群部署,树莓派做子节点,附带完整配置

STF 开发环境搭建与制作 docker 镜像过程

解决 openstf 只能识别75台设备的问题 – 尘缘的博客

> Traced this to an issue: the cause is an insufficient number of available ports. Each device needs 4 ports by default, and stf's default port count is 300; the fix is to change the parameters, or the defaults in the code.
>
> lib/cli/local/index.js
> ```js
> .option('provider-max-port', { describe: 'Highest port number for device workers to use.', type: 'number', default: 7900 }).option('provider-min-port', { describe: 'Lowest port number for device workers to use.', type: 'number', default: 7400 })
> ```


Docker Selenium Grid Configuration

Starting via Docker

Docker images for Selenium Grid Server (Standalone, Hub, and Nodes)

Pull the images

See version info: docker-selenium

docker pull selenium/hub:3.14.0
docker pull selenium/node-chrome:3.14.0

Start the hub

docker run -d -P --name selenium-hub selenium/hub:3.14.0

Start a node-chrome node

docker run -d --link selenium-hub:hub selenium/node-chrome:3.14.0

--link links the selenium-hub container into this node and gives it the alias hub.
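Once the hub is up you can point a test at it; a minimal sketch, assuming selenium is pip-installed on the host and the hub's port 4444 is reachable on localhost:

from selenium import webdriver

# Ask the grid for a Chrome session; the hub routes it to a matching node
driver = webdriver.Remote(
    command_executor="http://localhost:4444/wd/hub",
    desired_capabilities=webdriver.DesiredCapabilities.CHROME,
)
driver.get("https://example.com")
print(driver.title)
driver.quit()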

Starting via docker-compose

docker-compose.yaml

version: "3"
services:
selenium-hub:
image: selenium/hub:3.14.0-iron
container_name: selenium-hub
ports:
- "4444:4444"
chrome:
image: selenium/node-chrome:3.14.0-iron
depends_on:
- selenium-hub
environment:
- HUB_HOST=selenium-hub
- HUB_PORT=4444
firefox:
image: selenium/node-firefox:3.14.0-iron
depends_on:
- selenium-hub
environment:
- HUB_HOST=selenium-hub
- HUB_PORT=4444

Currently in use

# To execute this docker-compose yml file use `docker-compose -f <file_name> up`
# Add the `-d` flag at the end for detached execution
version: "3"
services:
  selenium-hub:
    image: selenium/hub:3.141.59-lithium
    # container_name: selenium-hub
    environment:
      - GRID_MAX_SESSION=10
      # - newSessionWaitTimeout=25000
      - JAVA_OPTS=-Xmx512m
      # - SE_OPTS="-debug"
    ports:
      - "4444:4444"
  chrome:
    image: selenium/node-chrome-debug:3.141.59-lithium
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
      - NODE_MAX_INSTANCES=10
      - NODE_MAX_SESSION=10
      - SCREEN_WIDTH=1366
      - SCREEN_HEIGHT=768
      - SCREEN_DEPTH=24
    ports:
      - "5900:5900"
  firefox:
    image: selenium/node-firefox-debug:3.141.59-lithium
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
      - NODE_MAX_INSTANCES=10
      - NODE_MAX_SESSION=10
      - SCREEN_WIDTH=1366
      - SCREEN_HEIGHT=768
      - SCREEN_DEPTH=24
    ports:
      - "5901:5900"

VNC password: secret

Pitfalls

1. Garbled Chinese characters in Docker selenium (unverified)

The 3.141.59-lithium version opens gbk-encoded sites, e.g. 163.com, without problems.

Dockerfile

FROM selenium/node-chrome-debug

USER root

RUN apt-get update \
&& apt-get -y install ttf-wqy-microhei ttf-wqy-zenhei \
&& apt-get clean

Build it

docker build -t selenium/node-chrome-debug-zh-cn .

2. Window maximize fails

In my scripts, maximizing the browser with driver.maximize_window() had always worked, but inside docker it fails with:
Message: unknown error: failed to change window state to maximized, current state is normal

Cause: the node container was started without a screen size. Set one:

- SCREEN_WIDTH=1366
- SCREEN_HEIGHT=768
- SCREEN_DEPTH=24

Or:

Some searching suggests this is a selenium bug with no clean fix, so here is a blunt workaround:

try:
    driver.maximize_window()
except WebDriverException as e:  # from selenium.common.exceptions import WebDriverException
    log.log().logger.info(e)
    driver.set_window_size(1920, 1080)  # if maximizing fails, fall back to a 1920x1080 window

3. Chrome options not taking effect.

Some test cases need to emulate mobile devices or run the browser in English, so chrome options are used for the setup. The original initialization script was:

desired_caps_web = webdriver.DesiredCapabilities.CHROME
deviceList = ['Galaxy S5', 'Nexus 5X', 'Nexus 6P', 'iPhone 6', 'iPhone 6 Plus', 'iPad', 'iPad Pro']
if devicename != '':
    if devicename not in deviceList:
        devicename = deviceList[2]
    chrome_option = {
        'args': ['lang=en_US', 'start-maximized'],
        'extensions': [],
        'mobileEmulation': {'deviceName': ''}
    }
    chrome_option['mobileEmulation']['deviceName'] = devicename
else:
    chrome_option = {
        'args': ['lang=en_US', '--start-maximized'],
        'extensions': []
    }
desired_caps_web['chromeOptions'] = chrome_option
log.log().logger.info(desired_caps_web)
driver = webdriver.remote.webdriver.WebDriver(command_executor=server_url, desired_capabilities=desired_caps_web)

But again, a script that had always run fine did nothing once inside docker.
The docker selenium node's log printed the following:

Capabilities are: Capabilities {browserName: chrome, chromeOptions: {args: [lang=zh_CN.UTF-8],  mobileEmulation: {deviceName: iPhone 6}}, goog:chromeOptions: {}, javascriptEnabled: true, version: }

Where does the extra goog:chromeOptions: {} entry come from?

Looking closely, the chromeOptions I set were passed through correctly, but the trailing goog:chromeOptions: {} seemed to override them.
Renaming the parameter in the script from "chromeOptions" to "goog:chromeOptions" worked like magic:

Capabilities are: Capabilities {browserName: chrome, goog:chromeOptions: {args: [lang=zh_CN.UTF-8], mobileEmulation: {deviceName: iPhone 6}}, javascriptEnabled: true, version: }

The script ran normally again, and the browser language and mobile-device emulation settings took effect!

So the script was changed to:

desired_caps_web['goog:chromeOptions']=chrome_option

Problem solved!

Manually created Chrome sessions are not reclaimed automatically

docker run --name selenium -d -p 5900:5900 -p 4444:4444 -e SE_OPTS="-timeout 31536000" -v /dev/shm:/dev/shm selenium/standalone-chrome-debug:3.141.59

SE_OPTS="-timeout 31536000" is one year

https://stackoverflow.com/questions/45591976/how-to-terminate-session-in-selenium-gridextras

Changing the VNC password

x11vnc -storepasswd <your-password-here> /home/seluser/.vnc/passwd

Start the VNC service

/opt/bin/start-vnc.sh

References

SeleniumHQ/selenium

SeleniumHQ/docker-selenium

Selenium Conference

docker+selenium+python构建前端自动化分布式测试环境

Docker Selenium-虫师

docker+selenium grid+python实现分布式自动化测试: 解决中文乱码问题

docker+selenium 搭建和踩坑记录

Docker disconf Configuration

Create the docker-compose file

mkdir ~/docker/disconf

vim docker-compose.yaml

docker-compose.yaml

version: '3'
services:
  disconf_redis_1:
    image: daocloud.io/library/redis
    restart: always
  disconf_redis_2:
    image: daocloud.io/library/redis
    restart: always
  disconf_zookeeper:
    image: zookeeper:3.3.6
    restart: always
  disconf_mysql:
    image: bolingcavalry/disconf_mysql:0.0.1
    environment:
      MYSQL_ROOT_PASSWORD: 123456
    restart: always
  disconf_tomcat:
    image: bolingcavalry/disconf_tomcat:0.0.1
    links:
      - disconf_redis_1:redishost001
      - disconf_redis_2:redishost002
      - disconf_zookeeper:zkhost
      - disconf_mysql:mysqlhost
    restart: always
  disconf_nginx:
    image: bolingcavalry/disconf_nginx:0.0.1
    links:
      - disconf_tomcat:tomcathost
    ports:
      - "8820:80"
    restart: always

Run

docker-compose up -d

To stop the whole environment:

docker-compose stop

To tear down the whole environment:

docker-compose rm

Access

The username and password are both admin.

References

Docker搭建disconf环境,三部曲之一:极速搭建disconf

Docker搭建disconf环境,三部曲之二:本地快速构建disconf镜像

Docker搭建disconf环境,三部曲之三:细说搭建过程

Docker InfluxDB and Grafana Configuration

Pull the image

docker pull samuelebistoletti/docker-statsd-influxdb-grafana:2.1.0

Start the container

docker run --ulimit nofile=66000:66000 \
-d \
--name docker-statsd-influxdb-grafana-2.1.0 \
-p 3003:3003 \
-p 3004:8888 \
-p 8086:8086 \
-p 22022:22 \
-p 8125:8125/udp \
samuelebistoletti/docker-statsd-influxdb-grafana:2.1.0

Stop and start

docker stop docker-statsd-influxdb-grafana-2.1.0

docker start docker-statsd-influxdb-grafana-2.1.0

Port mappings

Mapped Ports

Host		Container		Service

3003 3003 grafana
3004 8888 influxdb-admin (chronograf)
8086 8086 influxdb
8125 8125 statsd
22022 22 sshd

SSH

ssh root@localhost -p 22022

Password: root

Grafana

Open http://localhost:3003

Username: root
Password: root

Add data source on Grafana

  1. Using the wizard click on Add data source
  2. Choose a name for the source and flag it as Default
  3. Choose InfluxDB as type
  4. Choose direct as access
  5. Fill remaining fields as follows and click on Add without altering other fields
Url: http://localhost:8086
Database: telegraf
User: telegraf
Password: telegraf

Basic auth and credentials must be left unflagged. Proxy is not required.

Now you are ready to add your first dashboard and launch some query on database.

InfluxDB

Web Interface

Open http://localhost:3004

Username: root
Password: root
Port: 8086

InfluxDB Shell (CLI)

  1. Establish a ssh connection with the container
  2. Launch influx to open InfluxDB Shell (CLI)

Data collection

Install Telegraf

brew install telegraf

Configure

Edit telegraf.conf (/etc/telegraf/telegraf.conf, or /usr/local/etc/telegraf.conf for the brew install): change the influxdb address, username, and password, and set the hostname. Then run:

telegraf -config /usr/local/etc/telegraf.conf

Restart the service

brew services restart telegraf

Import a Grafana dashboard

Download the latest dashboard config:
https://grafana.com/dashboards/1443/revisions

In Grafana, create a new dashboard and import the config. Done.

Note

The docker image already runs telegraf internally; stop it if you don't need it. Install and configure Telegraf on multiple servers, all writing to the same InfluxDB, and you get system monitoring for the whole cluster.

References

Docker Image with Telegraf (StatsD), InfluxDB and Grafana

Docker+Grafana+Influxdb+Telegraf安装部署

Telegraf+InfluxDB+Grafana搭建服务器监控平台

grafana + influxdb + telegraf , 构建性能监控平台

Grafana部署-展示Zabbix数据

iOS Automated Testing

Appium iOS WebDriverAgent Configuration

Install the iOS authorization tool

npm install -g authorize-ios

Install tools

brew install libimobiledevice

brew install ios-deploy

brew install ios-webkit-debug-proxy

Install npm, then install Carthage:

brew install carthage

Clone the code

cd ~/Xcode

git clone https://github.com/facebook/WebDriverAgent.git

Fetch the dependencies

cd WebDriverAgent

mkdir -p Resources/WebDriverAgent.bundle

./Scripts/bootstrap.sh -d

WDA location

~/.nvm/versions/node/v8.13.0/lib/node_modules/appium/node_modules/appium-xcuitest-driver/WebDriverAgent

Double-click the WebDriverAgent.xcodeproj file to open it.

Adjust the settings

Build Settings->Product Bundle Identifier

WebDriverAgent->WebDriverAgentLib->Signing->Automatically manage signing

WebDriverAgent->WebDriverAgentRunner->Build Settings->Product Bundle Identifier

List devices

instruments -s devices
ios-deploy -c
idevice_id -l
  • Finally, you can verify that everything works. Build the project:
xcodebuild -project WebDriverAgent.xcodeproj -scheme WebDriverAgentRunner -destination 'id=<udid>' test

Run

Run WebDriverAgentRunner with the Test action.

When Xcode's console prints **Runner[2424:2946752] ServerURLHere->http://127.0.0.1:8100<-ServerURLHere**, it is running successfully.

Check the status: http://127.0.0.1:8100/status

Inspect element positions: http://127.0.0.1:8100/inspector

Get status info

http://localhost:8100/status

Get the source

http://localhost:8100/source

http://127.0.0.1:8100/source?format=json

Get session details

http://localhost:8100/session/D2BDF992-D087-4013-B354-05F48FC5A748/

Get the session source

http://localhost:8100/session/D2BDF992-D087-4013-B354-05F48FC5A748/source

Get a screenshot

http://localhost:8100/session/D2BDF992-D087-4013-B354-05F48FC5A748/screenshot

http://127.0.0.1:8100/screenshot
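A hedged sketch of calling one of these endpoints from Python (assumes the requests package and a running WDA; the screenshot endpoint returns base64-encoded image data in the 'value' field):

import base64
import requests

# Fetch the screenshot and decode the base64 payload to a PNG file
resp = requests.get("http://127.0.0.1:8100/screenshot").json()
with open("screen.png", "wb") as f:
    f.write(base64.b64decode(resp["value"]))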

FAQ

npm WARN deprecated minimatch@2.0.10: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue
npm WARN deprecated css-list@0.1.3: Deprecated.
npm WARN deprecated browserslist@0.4.0: Browserslist 2 could fail on reading Browserslist >3.0 config used in other tools.

npm WARN react-dom@15.6.2 requires a peer of react@^15.6.2 but none is installed. You must install peer dependencies yourself.

npm WARN web-driver-inspector@1.0.0 No repository field.

iOS自动化测试的那些干货

facebook/xctool

brew install xctool

iOS Development Bridge

idb usage; formerly called fbsimctl

References

Starting WebDriverAgent

iOS WebDriverAgent 环境搭建

iOS 真机如何安装 WebDriverAgent

iOS 真机调试如何安装 WebDriverAgent

Appium iOS 真机测试

线上班第六期_IOS 进阶 Webview 测试_20180603

线下第三期_iOS 真机测试_20180819

Appium XCUITest Driver Real Device Setup

ATX 文档 - iOS 真机如何安装 WebDriverAgent

JSONWP cannot find “wda/screen”

Running ELK in Docker

Elastic Stack

Elastic Stack and Product Documentation | Elastic

docker pull sebp/elk:642

Run

docker run -d \
--name elk-642 \
-e LOGSTASH_START=0 \
-e ES_HEAP_SIZE="1g" \
--network=elk \
-p 5601:5601 -p 9200:9200 -p 5044:5044 \
sebp/elk:642

Installing the components separately

Install elasticsearch

Install Elasticsearch with Docker | Elasticsearch Reference [6.5] | Elastic

docker pull elasticsearch:6.5.4

Install the elasticsearch-head extension

docker pull mobz/elasticsearch-head:5

Install kibana

Running Kibana on Docker | Kibana User Guide [6.5] | Elastic

docker pull kibana:6.5.4

Install logstash

docker pull logstash:6.5.4

Install filebeat

docker pull docker.elastic.co/beats/filebeat:6.5.4

Install metricbeat

docker pull docker.elastic.co/beats/metricbeat:6.5.4

docker-compose

docker-compose.yml

version: '3'
services:
  es-master:
    depends_on:
      - es-head
    image: elasticsearch:6.5.4
    container_name: es-master
    environment:
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./conf/es-master.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
      - ~/docker/elk/es-master/data:/usr/share/elasticsearch/data
      - ~/docker/elasticsearch/es-master/plugins:/usr/share/elasticsearch/plugins
    ports:
      - 9200:9200
      - "9300:9300"
    networks:
      - esnet
  es-head:
    image: mobz/elasticsearch-head:5
    container_name: es-head
    restart: always
    ports:
      - "9100:9100"
    networks:
      - esnet
  kibana:
    depends_on:
      - es-master
    image: kibana:6.5.4
    container_name: kibana
    volumes:
      - ./conf/kibana.yml:/usr/share/kibana/config/kibana.yml:ro
    ports:
      - "5601:5601"
    networks:
      - esnet

networks:
  esnet:

Config files

mkdir conf

conf/es-master.yml

cluster.name: es-cluster
node.name: es-master
node.master: true
node.data: true
bootstrap.memory_lock: true
# discovery.zen.minimum_master_nodes: 1
# discovery.zen.ping.unicast.hosts: ["es-master", "es-node1"]
# discovery.zen.ping.unicast.hosts: ["es-master"]
network.host: es-master
http.cors.enabled: true
http.cors.allow-origin: "*"
xpack.monitoring.collection.enabled: true

conf/kibana.yml

server.name: kibana
server.host: "0.0.0.0"
elasticsearch.url: http://es-master:9200
# xpack.monitoring.ui.container.elasticsearch.enabled: true

Start

docker-compose up -d

Inspect status of cluster

curl http://127.0.0.1:9200/_cat/health

http://localhost:9200/_cat/health?v

http://localhost:9200/_cat/health?format=json&bytes=b

Installing plugins

List installed plugins

elasticsearch-plugin list

IK Analysis Plugin (by Medcl)

The famous ik Chinese analyzer; you know the one!

elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.5.4/elasticsearch-analysis-ik-6.5.4.zip

Test

curl -XPUT http://localhost:9200/index
curl -XGET "http://localhost:9200/index/_analyze" -H 'Content-Type: application/json' -d'
{
"text":"中华人民共和国MN","tokenizer": "ik_max_word"
}'

References

ElasticSearch安装(以Docker的方式)

docker-compose安装elasticsearch集群

elasticsearch 6.3.2 集群配置

Kibana + ElasticSearch

ElasticSearch常用插件整理

Elasticsearch-RTF

Running the Elastic Stack on Docker | Getting Started [7.5] | Elastic

Configuring supervisor with pip on macOS

macOS 10.13.6

The etc directory is /usr/local/etc

mkdir -p /usr/local/etc/supervisor/
echo_supervisord_conf > /usr/local/etc/supervisor/supervisord.conf

vim /usr/local/etc/supervisor/supervisord.conf

[inet_http_server]         
port=127.0.0.1:9001

[supervisord]
logfile=/usr/local/etc/supervisor/supervisord.log
logfile_maxbytes=50MB ; max main logfile bytes b4 rotation; default 50MB
logfile_backups=10 ; # of main logfile backups; 0 means none, default 10
loglevel=info ; log level; default info; others: debug,warn,trace
pidfile=/usr/local/etc/supervisor/supervisord.pid

[supervisorctl]
serverurl=http://127.0.0.1:9001
;serverurl=unix:///usr/local/etc/supervisor/supervisor.sock ; only usable with a [unix_http_server] section

[include]
files = /usr/local/etc/supervisor/conf.d/*.conf
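The [include] line picks up one file per program from conf.d; a hypothetical conf.d/myapp.conf sketch (the program name, command, and paths are placeholders):

[program:myapp]
command=/usr/local/bin/python3 /path/to/myapp.py ; the process to supervise
directory=/path/to
autostart=true
autorestart=true
stdout_logfile=/usr/local/etc/supervisor/myapp.out.log
stderr_logfile=/usr/local/etc/supervisor/myapp.err.log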

Start supervisor

supervisord -c /usr/local/etc/supervisor/supervisord.conf

Configuring Macaca on macOS

Install node

iOS

XCTestWD

git clone https://github.com/macacajs/XCTestWD.git

cd XCTestWD/XCTestWD
xcodebuild -project XCTestWD.xcodeproj \
-scheme XCTestWDUITests \
-destination 'platform=iOS Simulator,name=iPhone 6' \
XCTESTWD_PORT=8001 \
clean test

Xcode

Driver Installation

npm i macaca-ios -g

The debug log is displayed when '--verbose' is passed as an argument when starting macaca.

  • Look up the TEAM_ID
DEVELOPMENT_TEAM_ID=TEAM_ID npm i macaca-ios -g

Android

  • JDK 1.8 (Java 9 is not supported)
  • Android SDK
  • Set the ANDROID_HOME environment variable to your ~/.bashrc, ~/.bash_profile, ~/.zshrc or

gradle is needed in order to build UIAutomatorWD and other packages

brew install gradle
  • If you get an error like [You have not accepted the license agreements of the following SDK components] on the install command [npm i macaca-android -g], please accept all Android SDK licenses with the command below and retry the install.
yes | $ANDROID_HOME/tools/bin/sdkmanager --licenses

ChromeDriver

Macaca Cli

npm i -g macaca-cli

Environment

Let's check the version and verify the environment.

# electron
#npm install electron -g
npm install macaca-electron -g

# show version
macaca -v

# verify environment
macaca doctor

Start Macaca

macaca server --verbose

Getting device IDs

iOS

From the command line

xcrun simctl list

This command lists all of your simulators; the XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX style code in the output is the simulator UDID.

From Xcode

Open the simulator and choose Hardware - Devices - Manage Devices from the menu. The simulator info screen that appears contains an identifier, which is the UDID.

Android

Command line

Start your device first, then inspect it with adb:

adb devices

Macaca App Inspector

Install Node

nvm install v8.16.0

Install app-inspector

npm install app-inspector -g

Simulators are now ready to use.

Start app-inspector

app-inspector -u YOUR-DEVICE-ID

Open the app-inspector UI

The command line will print something like:

inspector start at: http://192.168.10.100:5678

Open the printed link in a browser: http://127.0.0.1:5678.

Chrome is the recommended browser.

iOS real-device debugging

Install usbmuxd so that iOS real devices can be tested over the USB channel

brew install usbmuxd

Install libimobiledevice

brew install libimobiledevice

Install ideviceinstaller, used to install apps on real devices

brew install ideviceinstaller

If the app contains a WebView, install ios-webkit-debug-proxy

brew install ios-webkit-debug-proxy

Set environment variables

vim ~/.zshrc

TEAM_ID=C8V426QHXX

TEAM_ID is the developer team id, a ten-character string.

Install app-inspector

DEVELOPMENT_TEAM_ID=TEAM_ID npm i app-inspector -g

Use Xcode to change the XCUITest signing certificate

cd ~/.nvm/versions/node/v8.16.0/lib/node_modules/app-inspector/node_modules/xctestwd/XCTestWD

Change the scheme to:

Install the iOS driver

DEVELOPMENT_TEAM_ID=TEAM_ID npm i macaca-ios -g

Running Macaca in Docker

macacajs/macaca-android-docker

docker pull macacajs/macaca-android-docker:latest

macacajs/macaca-datahub

docker pull macacajs/macaca-datahub:latest
docker run -d -p 9200:9200 -p 9300:9300 macacajs/macaca-datahub:latest

References

Macaca Environment Setup

Macaca • 面向多端的自动化测试
