Relearning Binder

How traditional Linux IPC works

Linux process structure

Process isolation

Process isolation keeps the processes in a system from interfering with each other. Under the operating system, processes do not share data and cannot access each other's memory directly, which guarantees data safety.

With process isolation in place, processes must rely on an IPC (Inter-Process Communication) mechanism to exchange data.

Process address-space division

The core of the operating system is the kernel. It is independent of ordinary applications and can access protected memory as well as the underlying hardware.

To keep user processes from touching the kernel and to protect the kernel itself, the operating system splits the virtual address space into two parts:

  • Kernel space (typically 1 GB)

    The space in which the system kernel runs

  • User space (typically 3 GB)

    The space in which user programs run

User space is not shared between processes, whereas kernel space is shared by all processes.

System calls

User space has lower privileges than kernel space, so it cannot access kernel resources (file operations, network access, and so on) directly; it has to go through system calls.

System calls are the only way for user space to reach the kernel. They ensure that every resource access happens under the kernel's control, prevent user programs from reaching system resources they should not, and improve the security and stability of the system.

Linux uses two protection levels:

  • level 0, used by the system kernel
  • level 3, used by user programs

When a process executes kernel code through a system call, it enters kernel mode and the processor runs at level 0; when the process executes its own code, it is in user mode and the processor runs at level 3.

Inside system calls, two functions move data across this boundary:

  • copy_from_user: copies data from user space into kernel space
  • copy_to_user: copies data from kernel space back into user space

How traditional IPC works

Linux-IPC

  1. The sending process uses the copy_from_user system call to copy the data from its own memory buffer into a kernel buffer
  2. The kernel then uses copy_to_user to copy the data from the kernel buffer into the receiving process's memory buffer

This traditional IPC flow has two obvious drawbacks:

  1. Poor performance: the data is copied twice, sender buffer -> kernel buffer -> receiver buffer
  2. Wasted space and time: the receiver has to allocate a buffer in advance without knowing how large the incoming data will be, so it either allocates a generously sized buffer (wasting space) or first asks for the data size (wasting time).

Traditional Linux IPC mechanisms

  • Pipes
  • Message queues
  • Shared memory: no copying is needed because the shared buffer is mapped directly into each process's virtual address space, so it is fast; but it does not solve inter-process synchronization by itself
  • Sockets: a general-purpose interface with relatively low transfer efficiency, mainly used for communication between different machines
  • Semaphores: a locking mechanism that keeps other processes from accessing a shared resource while one process is using it
  • Signals

Binder fundamentals

Loadable kernel modules

A module is a piece of code with a self-contained function that can be compiled separately but cannot run on its own. With the loadable kernel module mechanism, a module can be added to kernel space dynamically, and user processes can then communicate through it.

On Android, the module loaded into kernel space is the Binder driver.

Memory mapping: mmap

Once the Binder driver is in place, inter-process communication can begin, and this is where mmap() (memory mapping) comes in.

mmap maps a file or other object into memory. It is normally used with file systems backed by physical media such as disks.

In Binder, mmap maps a region of user-space memory into kernel space; once the mapping is established, a change made by either side is visible to the other.

Binder creates a virtual device, /dev/binder, and allocates a data-receive buffer in kernel space. This data-receive buffer is mapped both to the kernel buffer and to the user space of the receiving process, which removes one data copy.

How it works

Binder flow

  1. The Binder driver creates a data-receive buffer in kernel space
  2. It then allocates a kernel buffer and establishes a mapping between the kernel buffer and the kernel data-receive buffer, as well as between the data-receive buffer and the receiving process's user space
  3. The sending process copies its data into the kernel buffer with copy_from_user; because of the mappings, this is equivalent to delivering the data directly to the receiving process

Binder's advantages

  • Performance

    Traditional Linux mechanisms such as pipes and sockets copy the data twice; Binder needs only one copy.

    Two-copy flow: the sender's data is copied into the kernel buffer with copy_from_user, then copied from the kernel buffer to the receiver with copy_to_user.

    Binder flow: a data-receive buffer is created in the kernel; the sender's data is copied into the kernel buffer with copy_from_user, and because the kernel buffer is already mapped to the data-receive buffer and to the receiving process's user-space buffer, the data effectively arrives at the receiver directly.

  • Security

    Traditional Linux IPC does not authenticate the two communicating parties; Binder carries identity information with every call, which improves security.

    Binder permission verification

  • Stability

    Binder is based on a client-server architecture: everything the Client needs is handled by the Server, so responsibilities are clearly separated.

Binder communication model

Binder communication model

Client

The client process.

The Client asks Service Manager for the Service it needs, obtains a Binder proxy object, and then sends requests to the Server through that proxy.

Server

The server process.

When a Server process starts, it registers its services with Service Manager through the Binder driver and starts a Binder thread pool to receive requests from Clients.

Service Manager

The manager of services. Here this refers to the native-layer ServiceManager, the "steward" of the whole Binder communication mechanism and the daemon of Android inter-process communication.

Its main responsibilities are:

  • A Service registers its Binder with ServiceManager through the Binder driver, announcing that it can serve clients.
  • A Client obtains a reference to that Binder from ServiceManager through the Binder driver.

Service Manager is itself just a process. Internally it maintains a table that maps service names to references to Binder entities.

Startup

The Service Manager process is started at boot:

  1. The init process parses servicemanager.rc and finds the executable /system/bin/servicemanager
  2. Execution then reaches main() in service_manager.c
//frameworks/native/cmds/servicemanager/service_manager.c
int main(int argc, char** argv)
{
struct binder_state *bs;
union selinux_callback cb;
char *driver;

if (argc > 1) {
driver = argv[1];
} else {
driver = "/dev/binder";
}

bs = binder_open(driver, 128*1024);

if (binder_become_context_manager(bs)) {
ALOGE("cannot become context manager (%s)\n", strerror(errno));
return -1;
}

#ifdef VENDORSERVICEMANAGER
sehandle = selinux_android_vendor_service_context_handle();
#else
sehandle = selinux_android_service_context_handle();
#endif
selinux_status_open(true);

binder_loop(bs, svcmgr_handler);

return 0;
}
binder_open()

Opens the device driver at /dev/binder.

//frameworks/native/cmds/servicemanager/binder.c
struct binder_state *binder_open(const char* driver, size_t mapsize)
{
struct binder_state *bs;
struct binder_version vers;

bs = malloc(sizeof(*bs));
if (!bs) {
errno = ENOMEM;
return NULL;
}
//打开Binder设备驱动
bs->fd = open(driver, O_RDWR | O_CLOEXEC);

bs->mapsize = mapsize;
//进行内存映射
bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);

return bs;

fail_map:
close(bs->fd);
fail_open:
free(bs);
return NULL;
}

binder_open() does three things:

  1. open() opens the /dev/binder device node, which ends up in the kernel Binder driver's own binder_open(); there a binder_proc is created and added to binder_procs
  2. mmap() maps 128 KB of memory; this mainly creates the binder_buffer objects inside the Binder driver
  3. It returns a binder_state that records:
    • fd: the file descriptor of the opened /dev/binder
    • mapsize: the size of the memory mapping
    • mapped: the address of the memory mapping
binder_become_context_manager()

Registers Service Manager as the "steward" (context manager).

int binder_become_context_manager(struct binder_state *bs)
{
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}

It sends a BINDER_SET_CONTEXT_MGR request to the Binder driver via ioctl, becoming the context manager.

When it registers with the Binder driver, its handle is fixed at 0, i.e. the reference to this binder is always 0.

Any Server that wants to register its own Binder must therefore talk to Service Manager's Binder through reference number 0; from Service Manager's point of view, every Server that registers itself is a Client.

binder_loop()

Loops forever, waiting for client requests.

//frameworks/native/cmds/servicemanager/binder.c
void binder_loop(struct binder_state *bs, binder_handler func)
{
int res;
struct binder_write_read bwr;
uint32_t readbuf[32];

bwr.write_size = 0;
bwr.write_consumed = 0;
bwr.write_buffer = 0;

readbuf[0] = BC_ENTER_LOOPER;
//向Binder驱动发送 BC_ENTER_LOOPER 协议,让Service Manager进入循环状态
binder_write(bs, readbuf, sizeof(uint32_t));

for (;;) {
bwr.read_size = sizeof(readbuf);
bwr.read_consumed = 0;
//读取Binder的数据 就会写到 readbuf中,此时就可以进行解析操作
bwr.read_buffer = (uintptr_t) readbuf;
//使Service Manager进入内核态
res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
//等待解析Client的请,收到消息切换用户态
res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
}
}

Service Manager first sends the BC_ENTER_LOOPER protocol to the Binder driver via binder_write(), putting itself into looping state. It then enters a for loop and sends a BINDER_WRITE_READ request to the Binder driver via ioctl(), which puts Service Manager into kernel mode where it waits for Clients to issue requests; while no request arrives it stays blocked. When a request arrives, binder_parse() is called to parse it and control returns to user mode.

BINDER_WRITE_READ: read from or write to the Binder driver. The argument has two parts, write_size and read_size:

  • If write_size is non-zero, the data in write_buffer is written into the Binder driver.
  • If read_size is non-zero, the Binder driver writes data into read_buffer; if read_buffer has no data yet, the call waits.

binder_write()

int binder_write(struct binder_state *bs, void *data, size_t len)
{
struct binder_write_read bwr;
int res;

bwr.write_size = len;
bwr.write_consumed = 0;
bwr.write_buffer = (uintptr_t) data;
bwr.read_size = 0;
bwr.read_consumed = 0;
bwr.read_buffer = 0;
res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
if (res < 0) {
fprintf(stderr,"binder_write: ioctl failed (%s)\n",
strerror(errno));
}
return res;
}

binder_write() essentially just calls ioctl() and is responsible for writing data to the Binder driver. Besides BC_ENTER_LOOPER there are other commands of this kind (all prefixed with BC_).

BC_ can be read as "write data to Binder".

enum binder_driver_command_protocol {
BC_TRANSACTION = _IOW('c', 0, struct binder_transaction_data), //Client向Server 发送请求数据,向Binder写入请求数据
BC_REPLY = _IOW('c', 1, struct binder_transaction_data), //Server 向Client 返回数据,向Binder写入回复数据
BC_ACQUIRE_RESULT = _IOW('c', 2, __s32),
BC_FREE_BUFFER = _IOW('c', 3, binder_uintptr_t),//释放一块mmap映射的内存
BC_INCREFS = _IOW('c', 4, __u32),
BC_ACQUIRE = _IOW('c', 5, __u32),
BC_RELEASE = _IOW('c', 6, __u32),
BC_DECREFS = _IOW('c', 7, __u32),
BC_INCREFS_DONE = _IOW('c', 8, struct binder_ptr_cookie),
BC_ACQUIRE_DONE = _IOW('c', 9, struct binder_ptr_cookie),
BC_ATTEMPT_ACQUIRE = _IOW('c', 10, struct binder_pri_desc),
BC_REGISTER_LOOPER = _IO('c', 11),
BC_ENTER_LOOPER = _IO('c', 12),
BC_EXIT_LOOPER = _IO('c', 13),
BC_REQUEST_DEATH_NOTIFICATION = _IOW('c', 14, struct binder_handle_cookie),
BC_CLEAR_DEATH_NOTIFICATION = _IOW('c', 15, struct binder_handle_cookie),
BC_DEAD_BINDER_DONE = _IOW('c', 16, binder_uintptr_t),
BC_TRANSACTION_SG = _IOW('c', 17, struct binder_transaction_data_sg),
BC_REPLY_SG = _IOW('c', 18, struct binder_transaction_data_sg),
};

binder_parse()

int binder_parse(struct binder_state *bs, struct binder_io *bio,
uintptr_t ptr, size_t size, binder_handler func)

{
int r = 1;
uintptr_t end = ptr + (uintptr_t) size;

while (ptr < end) {
switch(cmd) {
...
case BR_TRANSACTION: {
//当前是Binder驱动向 ServiceManager 发送请求数据,例如获取服务、注册服务
struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
if ((end - ptr) < sizeof(*txn)) {
ALOGE("parse: txn too small!\n");
return -1;
}
binder_dump_txn(txn);
if (func) { //func 指的就是 ServiceManager初始化的时候 传进来的 svcmgr_handler 函数
unsigned rdata[256/4];
struct binder_io msg;
struct binder_io reply;
int res;

bio_init(&reply, rdata, sizeof(rdata), 4);
bio_init_from_txn(&msg, txn);
res = func(bs, txn, &msg, &reply);
if (txn->flags & TF_ONE_WAY) {
binder_free_buffer(bs, txn->data.ptr.buffer);
} else {
binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);//返回应答数据
}
}
ptr += sizeof(*txn);
break;
}
case BR_REPLY: {
//当前是Binder驱动 向 ServiceManager发送 回复数据
struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
if ((end - ptr) < sizeof(*txn)) {
ALOGE("parse: reply too small!\n");
return -1;
}
binder_dump_txn(txn);
if (bio) {
bio_init_from_txn(bio, txn);
bio = 0;
} else {
/* todo FREE BUFFER */
}
ptr += sizeof(*txn);
r = 0;
break;
}
case BR_DEAD_BINDER: {
struct binder_death *death = (struct binder_death *)(uintptr_t) *(binder_uintptr_t *)ptr;
ptr += sizeof(binder_uintptr_t);
death->func(bs, death->ptr);
break;
}
...
}

return r;
}

binder_parse() parses the data read into read_buffer.

It mainly handles the BR_-prefixed commands; the important ones above are BR_TRANSACTION and BR_REPLY. There are other BR_ commands as well.

BR_ can be read as "read data from Binder".

enum binder_driver_return_protocol {
BR_ERROR = _IOR('r', 0, __s32),//发生内部错误
BR_OK = _IO('r', 1), //操作完成
BR_TRANSACTION = _IOR('r', 2, struct binder_transaction_data), //读取请求数据,从Binder读取请求数据 然后发送到Server
BR_REPLY = _IOR('r', 3, struct binder_transaction_data), //读取回复数据,从Binder读取回复数据 然后发送到Client
BR_ACQUIRE_RESULT = _IOR('r', 4, __s32),
BR_DEAD_REPLY = _IO('r', 5), //对方进程或线程已死
BR_TRANSACTION_COMPLETE = _IO('r', 6),
BR_INCREFS = _IOR('r', 7, struct binder_ptr_cookie),
BR_ACQUIRE = _IOR('r', 8, struct binder_ptr_cookie),
BR_RELEASE = _IOR('r', 9, struct binder_ptr_cookie),
BR_DECREFS = _IOR('r', 10, struct binder_ptr_cookie),
BR_ATTEMPT_ACQUIRE = _IOR('r', 11, struct binder_pri_ptr_cookie),
BR_NOOP = _IO('r', 12),
BR_SPAWN_LOOPER = _IO('r', 13),
BR_FINISHED = _IO('r', 14),
BR_DEAD_BINDER = _IOR('r', 15, binder_uintptr_t),//向持有Binder引用的进程通知Binder已死
BR_CLEAR_DEATH_NOTIFICATION_DONE = _IOR('r', 16, binder_uintptr_t),
BR_FAILED_REPLY = _IO('r', 17),
};

BR_TRANSACTION needs special handling, because this is where Service Manager provides its services to the outside world, such as looking up and registering services; these are described briefly later.

After binder_parse() receives BR_TRANSACTION, it calls into svcmgr_handler().

//frameworks/native/cmds/servicemanager/service_manager.c
int svcmgr_handler(struct binder_state *bs,
struct binder_transaction_data *txn,
struct binder_io *msg,
struct binder_io *reply)

{
struct svcinfo *si;//记录着 服务信息
uint16_t *s;
size_t len;
uint32_t handle;
uint32_t strict_policy;
...
switch(txn->code) {
case SVC_MGR_GET_SERVICE:
case SVC_MGR_CHECK_SERVICE://获取服务 一般是Client发起获取服务请求
s = bio_get_string16(msg, &len);//服务名
if (s == NULL) {
return -1;
}
handle = do_find_service(s, len, txn->sender_euid, txn->sender_pid);//获取对应服务
if (!handle)
break;
bio_put_ref(reply, handle);
return 0;

case SVC_MGR_ADD_SERVICE://添加服务 一般是Server发起注册服务请求
s = bio_get_string16(msg, &len);
if (s == NULL) {
return -1;
}
handle = bio_get_ref(msg);
allow_isolated = bio_get_uint32(msg) ? 1 : 0;
dumpsys_priority = bio_get_uint32(msg);
if (do_add_service(bs, s, len, handle, txn->sender_euid, allow_isolated, dumpsys_priority,
txn->sender_pid))//注册服务
return -1;
break;

case SVC_MGR_LIST_SERVICES: { //列举所有注册的服务
uint32_t n = bio_get_uint32(msg);
uint32_t req_dumpsys_priority = bio_get_uint32(msg);

if (!svc_can_list(txn->sender_pid, txn->sender_euid)) {
ALOGE("list_service() uid=%d - PERMISSION DENIED\n",
txn->sender_euid);
return -1;
}
si = svclist;
// walk through the list of services n times skipping services that
// do not support the requested priority
while (si) {
if (si->dumpsys_priority & req_dumpsys_priority) {
if (n == 0) break;
n--;
}
si = si->next;
}
if (si) {
bio_put_string16(reply, si->name);
return 0;
}
return -1;
}
default:
ALOGE("unknown code %d\n", txn->code);
return -1;
}

bio_put_uint32(reply, 0);
return 0;
}

svcmgr_handler() implements the service-related functionality; each code corresponds to one operation:

  • SVC_MGR_GET_SERVICE / SVC_MGR_CHECK_SERVICE: look up a service
  • SVC_MGR_ADD_SERVICE: register a service
  • SVC_MGR_LIST_SERVICES: list all registered services

Service Manager stores services in a linked list called svclist, whose elements are svcinfo structures:

struct svcinfo
{

struct svcinfo *next; //下一个注册服务
uint32_t handle; //服务的 句柄
struct binder_death death;
int allow_isolated;
uint32_t dumpsys_priority;
size_t len;
uint16_t name[0]; //服务名
};

ServiceManager startup flow

Summary

ServiceManager startup boils down to three steps:

  1. binder_open(): open the driver, /dev/binder
  2. binder_become_context_manager(): become the context manager and prepare to loop
  3. binder_loop(): enter the loop, waiting for and handling new messages

ServiceManager initialization

Obtaining the Service Manager proxy object

//frameworks/native/libs/binder/IServiceManager.cpp
[[clang::no_destroy]] static sp<IServiceManager> gDefaultServiceManager;

sp<IServiceManager> defaultServiceManager()
{
if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

{
AutoMutex _l(gDefaultServiceManagerLock);
while (gDefaultServiceManager == NULL) {
gDefaultServiceManager = interface_cast<IServiceManager>(
ProcessState::self()->getContextObject(NULL));
if (gDefaultServiceManager == NULL)
sleep(1);
}
}

return gDefaultServiceManager;
}

The creation of gDefaultServiceManager breaks down into the following steps.

ProcessState::self()

Creates the ProcessState object; each process has exactly one.

//frameworks/native/libs/binder/ProcessState.cpp
#define DEFAULT_BINDER_VM_SIZE ((1 * 1024 * 1024) - sysconf(_SC_PAGE_SIZE) * 2)
#define DEFAULT_MAX_BINDER_THREADS 15

sp<ProcessState> ProcessState::self()
{
Mutex::Autolock _l(gProcessMutex);
if (gProcess != NULL) {//采用单例模式,保证只有一个
return gProcess;
}
gProcess = new ProcessState(DEFAULT_BINDER_VM_SIZE);//实例化ProcessState
return gProcess;
}

ProcessState::ProcessState(const char *driver)
: mDriverName(String8(driver))
, mDriverFD(open_driver(driver)) //1⃣️ 打开binder驱动
, mVMStart(MAP_FAILED)
, mThreadCountLock(PTHREAD_MUTEX_INITIALIZER)
, mThreadCountDecrement(PTHREAD_COND_INITIALIZER)
, mExecutingThreadsCount(0)
, mMaxThreads(DEFAULT_MAX_BINDER_THREADS)
, mStarvationStartTimeMs(0)
, mThreadPoolStarted(false)
, mThreadPoolSeq(1)
, mCallRestriction(CallRestriction::NONE)
{

if (mDriverFD >= 0) {
//3⃣️ 通过mmap 在 binder驱动映射一块内存,用来接收事务
mVMStart = mmap(nullptr, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
if (mVMStart == MAP_FAILED) {
// *sigh*
ALOGE("Using %s failed: unable to mmap transaction memory.\n", mDriverName.c_str());
close(mDriverFD);
mDriverFD = -1;
mDriverName.clear();
}
}
}

static int open_driver(const char *driver)
{
int fd = open(driver, O_RDWR | O_CLOEXEC);
if (fd >= 0) {
int vers = 0;
//获取Binder驱动版本
status_t result = ioctl(fd, BINDER_VERSION, &vers);
if (result == -1) {
ALOGE("Binder ioctl to obtain version failed: %s", strerror(errno));
close(fd);
fd = -1;
}

size_t maxThreads = DEFAULT_MAX_BINDER_THREADS;
//2⃣️ 通过 ioctl为 binder驱动设置 最大线程数,默认为15
result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
if (result == -1) {
ALOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
}
} else {
ALOGW("Opening '%s' failed: %s\n", driver, strerror(errno));
}
return fd;
}

ProcessState guarantees that each process opens the binder device only once; the driver fd is recorded in mDriverFD and used for all later access to the Binder device.

ProcessState initialization performs the following steps:

  1. 1⃣️ open_driver(): open the binder driver device and check that the binder driver version matches.
  2. 2⃣️ ioctl(): set the driver's maximum number of binder threads, 15 by default; together with the main binder thread that makes at most 16.
  3. 3⃣️ mmap(): map a 1016 KB region in the binder driver for handling transactions.
getContextObject()

Mainly used to obtain the BpBinder object.

sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
//打开handle 为 0 的IBinder对象
sp<IBinder> context = getStrongProxyForHandle(0);

return context;
}

It obtains the IBinder object whose handle is 0, which is in fact ServiceManager's BpBinder object.

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
sp<IBinder> result;

AutoMutex _l(mLock);
//根据 handle 查找对应的 handle_entry
handle_entry* e = lookupHandleLocked(handle);

if (e != nullptr) {
IBinder* b = e->binder;
if (b == nullptr || !e->refs->attemptIncWeak(this)) {
if (handle == 0) {

IPCThreadState* ipc = IPCThreadState::self();

CallRestriction originalCallRestriction = ipc->getCallRestriction();
ipc->setCallRestriction(CallRestriction::NONE);

Parcel data;
//验证binder是否就绪
status_t status = ipc->transact(
0, IBinder::PING_TRANSACTION, data, nullptr, 0);

ipc->setCallRestriction(originalCallRestriction);

if (status == DEAD_OBJECT)
return nullptr;
}
//handle值对应的 IBinder不存在或无效时,新建一个 BpBinder对象
b = BpBinder::create(handle);
e->binder = b;
if (b) e->refs = b->getWeakRefs();
result = b;
} else {
result.force_set(b);
e->refs->decWeak(this);
}
}

return result;
}

getContextObject() does the following:

  1. getStrongProxyForHandle(): obtain the IBinder object whose handle is 0
  2. IPCThreadState::self()->transact(): send a PING_TRANSACTION to the Binder driver to check whether it is ready
  3. BpBinder::create(): create ServiceManager's BpBinder object
interface_cast()

Creates the BpServiceManager object.

// IInterface.h
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
return INTERFACE::asInterface(obj);
//等价于 IServiceManager::asInterface
}

interface_cast() is a template function; after expansion it effectively becomes the following:

const android::String16 IServiceManager::descriptor("android.os.IServiceManager");

const android::String16& IServiceManager::getInterfaceDescriptor() const
{
return IServiceManager::descriptor;
}

android::sp<IServiceManager> IServiceManager::asInterface(const android::sp<android::IBinder>& obj)
{
android::sp<IServiceManager> intr;
if(obj != NULL) {
intr = static_cast<IServiceManager *>(
obj->queryLocalInterface(IServiceManager::descriptor).get());
if (intr == NULL) {
intr = new BpServiceManager(obj); //创建BpServiceManager对象
}
}
return intr;
}

At this point the BpServiceManager object is constructed:

    explicit BpServiceManager(const sp<IBinder>& impl)
: BpInterface<IServiceManager>(impl)
{
}

// IInterface.h
inline BpInterface<IServiceManager>::BpInterface(const sp<IBinder>& remote)
:BpRefBase(remote)
{ }

// Binder.cpp
BpRefBase::BpRefBase(const sp<IBinder>& o)
: mRemote(o.get()), mRefs(NULL), mState(0)
{
extendObjectLifetime(OBJECT_LIFETIME_WEAK);
if (mRemote) {
mRemote->incStrong(this);
mRefs = mRemote->createWeak(this);
}
}

During BpServiceManager's initialization, the constructors of BpRefBase, BpInterface, and BpServiceManager run in turn, and BpRefBase's mRemote ends up holding BpBinder(0).

So defaultServiceManager is ultimately equivalent to new BpServiceManager(new BpBinder(0)).

Summary

Obtaining the ServiceManager proxy

  • open: creates the binder_proc
  • BINDER_SET_MAX_THREADS: sets proc->max_threads
  • mmap: creates the binder_buffer
javaObjectForIBinder()

Mainly used to obtain the BinderProxy object.

//frameworks/base/core/jni/android_util_Binder.cpp
//负责创建一个 BinderProxy对象
jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val)
{

BinderProxyNativeData* nativeData = new BinderProxyNativeData();
nativeData->mOrgue = new DeathRecipientList;
nativeData->mObject = val;

jobject object = env->CallStaticObjectMethod(gBinderProxyOffsets.mClass,
gBinderProxyOffsets.mGetInstance, (jlong) nativeData, (jlong) val.get());
if (env->ExceptionCheck()) {
// In the exception case, getInstance still took ownership of nativeData.
return NULL;
}
BinderProxyNativeData* actualNativeData = getBPNativeData(env, object);
if (actualNativeData == nativeData) {
// Created a new Proxy
uint32_t numProxies = gNumProxies.fetch_add(1, std::memory_order_relaxed);
uint32_t numLastWarned = gProxiesWarned.load(std::memory_order_relaxed);
if (numProxies >= numLastWarned + PROXY_WARN_INTERVAL) {
// Multiple threads can get here, make sure only one of them gets to
// update the warn counter.
if (gProxiesWarned.compare_exchange_strong(numLastWarned,
numLastWarned + PROXY_WARN_INTERVAL, std::memory_order_relaxed)) {
ALOGW("Unexpectedly many live BinderProxies: %d\n", numProxies);
}
}
} else {
delete nativeData;
}

return object;
}

After this, BinderInternal.getContextObject() returns a BinderProxy.

Execution then continues into ServiceManagerNative.asInterface():

//frameworks/base/core/java/android/os/ServiceManagerNative.java
public static IServiceManager asInterface(IBinder obj) {
if (obj == null) {
return null;
}

// ServiceManager is never local
return new ServiceManagerProxy(obj);
}

In other words, the proxy object finally produced is a ServiceManagerProxy.

Registering a service with Service Manager

A Service registers itself with Service Manager.

//ServiceManager.java
public static void addService(String name, IBinder service, boolean allowIsolated,
int dumpPriority)
{
try {
getIServiceManager().addService(name, service, allowIsolated, dumpPriority);
} catch (RemoteException e) {
Log.e(TAG, "error in addService", e);
}
}

private static IServiceManager getIServiceManager() {
if (sServiceManager != null) {
return sServiceManager;
}

// Find the service manager
sServiceManager = ServiceManagerNative
.asInterface(Binder.allowBlocking(BinderInternal.getContextObject()));
return sServiceManager;
}

sServiceManager ends up being the BpServiceManager object from the previous section.

//frameworks/native/libs/binder/IServiceManager.cpp
virtual status_t addService(const String16& name, const sp<IBinder>& service,
bool allowIsolated, int dumpsysPriority)
{
Parcel data, reply;
data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
data.writeString16(name);//服务的name
data.writeStrongBinder(service);//具体 服务
data.writeInt32(allowIsolated ? 1 : 0);
data.writeInt32(dumpsysPriority);
//remote 表示 BpBinder对象
status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
return err == NO_ERROR ? reply.readExceptionCode() : err;
}

addService() registers a service with Service Manager: it packs the relevant data into a Parcel object.

It then calls transact() on the BpBinder to transfer the data:

//frameworks/native/libs/binder/BpBinder.cpp
status_t BpBinder::transact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)

{
// Once a binder has died, it will never come back to life.
if (mAlive) {
status_t status = IPCThreadState::self()->transact(
mHandle, code, data, reply, flags);
if (status == DEAD_OBJECT) mAlive = 0;
return status;
}

return DEAD_OBJECT;
}
IPCThreadState->transact

After initializing IPCThreadState, it sends the data to the Binder driver.

//frameworks/native/libs/binder/IPCThreadState.cpp
IPCThreadState* IPCThreadState::self()
{
if (gHaveTLS) {
restart:
const pthread_key_t k = gTLS;
IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
if (st) return st;
return new IPCThreadState;//初始IPCThreadState
}

pthread_mutex_lock(&gTLSMutex);
if (!gHaveTLS) {
int key_create_value = pthread_key_create(&gTLS, threadDestructor);
if (key_create_value != 0) {
pthread_mutex_unlock(&gTLSMutex);
ALOGW("IPCThreadState::self() unable to create TLS key, expect a crash: %s\n",
strerror(key_create_value));
return NULL;
}
gHaveTLS = true;
}
pthread_mutex_unlock(&gTLSMutex);
goto restart;
}

IPCThreadState::IPCThreadState()
: mProcess(ProcessState::self()),
mStrictModePolicy(0),
mLastTransactionBinderFlags(0)
{
pthread_setspecific(gTLS, this);
clearCaller();
mIn.setDataCapacity(256);
mOut.setDataCapacity(256);
}

void IPCThreadState::clearCaller()
{
mCallingPid = getpid(); //初始化PID
mCallingUid = getuid(); //初始化UID
}

Each thread has one IPCThreadState, which contains:

  • mIn: receives data coming from the Binder device
  • mOut: holds data to be sent to the Binder device
  • mProcess: the current process's ProcessState
  • mCallingPid: the current process's pid
  • mCallingUid: the current process's uid

Next, transact() is executed to transfer the data:

status_t IPCThreadState::transact(int32_t handle,
uint32_t code, const Parcel& data,
Parcel* reply, uint32_t flags)

{
status_t err;

flags |= TF_ACCEPT_FDS;
//传输数据
err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);

if ((flags & TF_ONE_WAY) == 0) {

if (reply) {
//等待响应
err = waitForResponse(reply);
} else {
//直接返回null
Parcel fakeReply;
err = waitForResponse(&fakeReply);
}

IF_LOG_TRANSACTIONS() {
TextOutput::Bundle _b(alog);
alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
<< handle << ": ";
if (reply) alog << indent << *reply << dedent << endl;
else alog << "(none requested)" << endl;
}
} else {
//直接返回null
err = waitForResponse(NULL, NULL);
}

return err;
}

transact() does two main things:

  1. writeTransactionData(): writes the data into the Parcel mOut

    What gets written is the BC_TRANSACTION protocol plus a binder_transaction_data payload.

  2. waitForResponse(): loops, waiting for the reply.

    status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
    {
    int32_t cmd;
    int32_t err;
    while (1) {
    if ((err=talkWithDriver()) < NO_ERROR) break;
    ...
    if (mIn.dataAvail() == 0) continue;

    cmd = mIn.readInt32();
    switch (cmd) {
    case BR_TRANSACTION_COMPLETE:
    //如果是 oneway 的请求方式,直接结束即可
    if (!reply && !acquireResult) goto finish;
    break;
    case BR_DEAD_REPLY: ...
    case BR_FAILED_REPLY: ...
    case BR_ACQUIRE_RESULT: ...
    case BR_REPLY: ...
    //完整的执行一次通信过程
    goto finish;

    default:
    err = executeCommand(cmd);
    if (err != NO_ERROR) goto finish;
    break;
    }
    }
    ...
    return err;
    }
IPCThreadState.talkWithDriver()

Responsible for communicating with the Binder driver.

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
...
binder_write_read bwr;
//当mDataSize <= mDataPos,则有数据可读
const bool needRead = mIn.dataPosition() >= mIn.dataSize();
const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

bwr.write_size = outAvail;
bwr.write_buffer = (uintptr_t)mOut.data(); // mData地址

if (doReceive && needRead) {
//接收数据缓冲区信息的填充。如果以后收到数据,就直接填在mIn中。
bwr.read_size = mIn.dataCapacity();
bwr.read_buffer = (uintptr_t)mIn.data();
} else {
bwr.read_size = 0;
bwr.read_buffer = 0;
}
//当读缓冲和写缓冲都为空,则直接返回
if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

bwr.write_consumed = 0;
bwr.read_consumed = 0;
status_t err;
do {
//通过ioctl不停的读写操作,跟Binder Driver进行通信
if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
err = NO_ERROR;
...
} while (err == -EINTR);
...
return err;
}
binder_ioctl()

Communicates with the Binder driver:

  • binder_ioctl() parses the ioctl argument; for BINDER_WRITE_READ it calls binder_ioctl_write_read();
  • binder_ioctl_write_read() copies the user-space binder_write_read structure into kernel space; if the write buffer contains data, it calls binder_thread_write();
  • binder_thread_write() parses the transport protocol; for BC_TRANSACTION it calls binder_transaction();
  • binder_transaction() copies the user-space binder_transaction_data structure into kernel space, and the kernel creates a binder_transaction structure to describe the transaction.

Service registration flow

binder_parse()

Parses the data returned by the Binder driver.

As described in the Service Manager startup section, binder_loop() receives the message and hands it to binder_parse().

The command received is:

case BR_TRANSACTION: {
struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
if ((end - ptr) < sizeof(*txn)) {
ALOGE("parse: txn too small!\n");
return -1;
}
binder_dump_txn(txn);
if (func) {
unsigned rdata[256/4];
struct binder_io msg;
struct binder_io reply;
int res;

bio_init(&reply, rdata, sizeof(rdata), 4);
bio_init_from_txn(&msg, txn);
res = func(bs, txn, &msg, &reply);
if (txn->flags & TF_ONE_WAY) {
binder_free_buffer(bs, txn->data.ptr.buffer);
} else {
binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
}
}
ptr += sizeof(*txn);
break;
}
int svcmgr_handler(struct binder_state *bs,
struct binder_transaction_data *txn,
struct binder_io *msg,
struct binder_io *reply)

{
...
case SVC_MGR_ADD_SERVICE:
s = bio_get_string16(msg, &len);
if (s == NULL) {
return -1;
}
handle = bio_get_ref(msg);
allow_isolated = bio_get_uint32(msg) ? 1 : 0;
dumpsys_priority = bio_get_uint32(msg);
if (do_add_service(bs, s, len, handle, txn->sender_euid, allow_isolated, dumpsys_priority,
txn->sender_pid))
return -1;
break;


}
do_add_service()

Adds the service to Service Manager.

int do_add_service(struct binder_state *bs, const uint16_t *s, size_t len, uint32_t handle,
uid_t uid, int allow_isolated, uint32_t dumpsys_priority, pid_t spid)
{
struct svcinfo *si;

//ALOGI("add_service('%s',%x,%s) uid=%d\n", str8(s, len), handle,
// allow_isolated ? "allow_isolated" : "!allow_isolated", uid);

if (!handle || (len == 0) || (len > 127))
return -1;

if (!svc_can_register(s, len, spid, uid)) {
ALOGE("add_service('%s',%x) uid=%d - PERMISSION DENIED\n",
str8(s, len), handle, uid);
return -1;
}

si = find_svc(s, len);
if (si) {
if (si->handle) {
ALOGE("add_service('%s',%x) uid=%d - ALREADY REGISTERED, OVERRIDE\n",
str8(s, len), handle, uid);
svcinfo_death(bs, si);
}
si->handle = handle;
} else {
si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
if (!si) {
ALOGE("add_service('%s',%x) uid=%d - OUT OF MEMORY\n",
str8(s, len), handle, uid);
return -1;
}
si->handle = handle;
si->len = len;
memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
si->name[len] = '\0';
si->death.func = (void*) svcinfo_death;
si->death.ptr = si;
si->allow_isolated = allow_isolated;
si->dumpsys_priority = dumpsys_priority;
si->next = svclist;//svcList保存服务
svclist = si;
}

binder_acquire(bs, handle);
binder_link_to_death(bs, handle, &si->death);
return 0;
}

Registering a service from Java

Everything above happens in the native layer; this section briefly looks at how the Java layer registers a service.

Services are normally registered with Service Manager through ServiceManager.addService(); this path is intended for system services.

After the system-service analysis, the registration of developer-defined Services is covered briefly.

Registration is performed by the Server, so the steps below all run on the Server side.

Registering a system service (SystemServer)

System services are generally services started by the SystemServer process, such as InputManagerService and WindowManagerService.

//SystemServer.java
inputManager = new InputManagerService(context);
ServiceManager.addService(Context.INPUT_SERVICE, inputManager,
/* allowIsolated= */ false, DUMP_FLAG_PRIORITY_CRITICAL);

They are all registered through ServiceManager.addService():

//core/java/android/os/ServiceManager.java
public static void addService(String name, IBinder service, boolean allowIsolated,
int dumpPriority)
{
try {
getIServiceManager().addService(name, service, allowIsolated, dumpPriority);
} catch (RemoteException e) {
Log.e(TAG, "error in addService", e);
}
}

private static IServiceManager getIServiceManager() {
if (sServiceManager != null) {
return sServiceManager;
}

// Find the service manager
sServiceManager = ServiceManagerNative
.asInterface(Binder.allowBlocking(BinderInternal.getContextObject()));
return sServiceManager;
}

static public IServiceManager asInterface(IBinder obj)
{
//obj 为 BinderProxy对象
if (obj == null) {
return null;
}
IServiceManager in =
(IServiceManager)obj.queryLocalInterface(descriptor);
if (in != null) {
return in;
}

return new ServiceManagerProxy(obj);
}

The detailed flow was covered earlier, so here is just the conclusion:

sServiceManager ends up being a ServiceManagerProxy object, and its IBinder is a BinderProxy.

//core/java/android/os/ServiceManagerNative.java
class ServiceManagerProxy implements IServiceManager {
public ServiceManagerProxy(IBinder remote) {
mRemote = remote;
}

public void addService(String name, IBinder service, boolean allowIsolated, int dumpPriority)
throws RemoteException
{
Parcel data = Parcel.obtain();
Parcel reply = Parcel.obtain();
data.writeInterfaceToken(IServiceManager.descriptor);
data.writeString(name);
data.writeStrongBinder(service);
data.writeInt(allowIsolated ? 1 : 0);
data.writeInt(dumpPriority);
//mRemote为 BinderProxy对象
mRemote.transact(ADD_SERVICE_TRANSACTION, data, reply, 0);
reply.recycle();
data.recycle();
}
writeStrongBinder()

Writing the Binder entity into a Parcel is what allows it to be delivered to the remote end.

public final void writeStrongBinder(IBinder val) {
    nativeWriteStrongBinder(mNativePtr, val);
}
//core/jni/android_os_Parcel.cpp
static void android_os_Parcel_writeStrongBinder(JNIEnv* env, jclass clazz, jlong nativePtr, jobject object)
{
Parcel* parcel = reinterpret_cast<Parcel*>(nativePtr);
if (parcel != NULL) {
const status_t err = parcel->writeStrongBinder(ibinderForJavaObject(env, object));
if (err != NO_ERROR) {
signalExceptionForError(env, clazz, err);
}
}
}

ibinderForJavaObject()

Converts a Java-layer Binder into a C++ Binder object.

//core/jni/android_util_Binder.cpp
sp<IBinder> ibinderForJavaObject(JNIEnv* env, jobject obj)
{
if (obj == NULL) return NULL;

// Java层的Binder对象
if (env->IsInstanceOf(obj, gBinderOffsets.mClass)) {
JavaBBinderHolder* jbh = (JavaBBinderHolder*)
env->GetLongField(obj, gBinderOffsets.mObject);
return jbh->get(env, obj);
}

// Java层的BinderProxy对象
if (env->IsInstanceOf(obj, gBinderProxyOffsets.mClass)) {
return getBPNativeData(env, obj)->mObject;
}

ALOGW("ibinderForJavaObject: %p is not a Binder object", obj);
return NULL;
}
sp<JavaBBinder> get(JNIEnv* env, jobject obj)
{
AutoMutex _l(mLock);
sp<JavaBBinder> b = mBinder.promote();
if (b == NULL) {
b = new JavaBBinder(env, obj);
mBinder = b;
ALOGV("Creating JavaBinder %p (refs %p) for Object %p, weakCount=%" PRId32 "\n",
b.get(), b->getWeakRefs(), obj, b->getWeakRefs()->getWeakCount());
}

return b;
}

ibinderForJavaObject() ultimately produces a JavaBBinder object.

Java Binder and native Binder

parcel->writeStrongBinder()

//frameworks/native/libs/binder/Parcel.cpp
status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
return flatten_binder(ProcessState::self(), val, this);
}

status_t flatten_binder(const sp<ProcessState>& /*proc*/,
const sp<IBinder>& binder, Parcel* out)

{
flat_binder_object obj;

if (IPCThreadState::self()->backgroundSchedulingDisabled()) {
/* minimum priority for all nodes is nice 0 */
obj.flags = FLAT_BINDER_FLAG_ACCEPTS_FDS;
} else {
/* minimum priority for all nodes is MAX_NICE(19) */
obj.flags = 0x13 | FLAT_BINDER_FLAG_ACCEPTS_FDS;
}

if (binder != NULL) {
IBinder *local = binder->localBinder();//本地bidner对象
if (!local) {
BpBinder *proxy = binder->remoteBinder();//远程Binder对象
if (proxy == NULL) {
ALOGE("null proxy");
}
const int32_t handle = proxy ? proxy->handle() : 0;
obj.hdr.type = BINDER_TYPE_HANDLE;
obj.binder = 0; /* Don't pass uninitialized stack data to a remote process */
obj.handle = handle;
obj.cookie = 0;
} else {
obj.hdr.type = BINDER_TYPE_BINDER;
obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
obj.cookie = reinterpret_cast<uintptr_t>(local);
}
} else {
obj.hdr.type = BINDER_TYPE_BINDER;
obj.binder = 0;
obj.cookie = 0;
}

//写入 flat_binder_object到 out
return finish_flatten_binder(binder, obj, out);
}

writeStrongBinder() converts the IBinder object into a flat_binder_object.

mRemote.transact(ADD_SERVICE_TRANSACTION)

Transfers the Binder object to the Binder driver through the BinderProxy.

mRemote.transact(ADD_SERVICE_TRANSACTION, data, reply, 0);
//core/java/android/os/Binder.java
final class BinderProxy implements IBinder {
public boolean transact(int code, Parcel data, Parcel reply, int flags) throws RemoteException {
//检测 data 的数据是否大于 800k
Binder.checkParcel(this, code, data, "Unreasonably large binder buffer");


try {
//通过Native层 向 Binder驱动传递消息
return transactNative(code, data, reply, flags);
} finally {
if (tracingEnabled) {
Trace.traceEnd(Trace.TRACE_TAG_ALWAYS);
}
}
}

}

public native boolean transactNative(int code, Parcel data, Parcel reply,
int flags)
throws RemoteException
;
//core/jni/android_util_Binder.cpp
static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj,
jint code, jobject dataObj, jobject replyObj, jint flags)
// throws RemoteException
{

//解析传递数据
Parcel* data = parcelForJavaObject(env, dataObj);
//解析返回数据
Parcel* reply = parcelForJavaObject(env, replyObj);

//target 为 BpBinder
IBinder* target = getBPNativeData(env, obj)->mObject.get();

//printf("Transact from Java code to %p sending: ", target); data->print();
//向 Binder驱动发送数据
status_t err = target->transact(code, *data, reply, flags);
//if (reply) printf("Transact from Java code to %p received: ", target); reply->print();

signalExceptionForError(env, obj, err, true /*canThrowRemoteException*/, data->dataSize());
return JNI_FALSE;
}


BinderProxyNativeData* getBPNativeData(JNIEnv* env, jobject obj) {
return (BinderProxyNativeData *) env->GetLongField(obj, gBinderProxyOffsets.mNativeData);
}

struct BinderProxyNativeData {
// Both fields are constant and not null once javaObjectForIBinder returns this as
// part of a BinderProxy.

// The native IBinder proxied by this BinderProxy.
sp<IBinder> mObject;

// Death recipients for mObject. Reference counted only because DeathRecipients
// hold a weak reference that can be temporarily promoted.
sp<DeathRecipientList> mOrgue; // Death recipients for mObject.
};

Execution then continues into IPCThreadState->transact.

Summary

ServiceManager.addService() performs the following steps:

  1. Parcel.obtain(): build the native-layer Parcel object
  2. parcel.writeStrongBinder(): build a JavaBBinder object and write it into a flat_binder_object, ready to be sent to the Binder driver
  3. BpBinder.transact(ADD_SERVICE_TRANSACTION): send the data to the Binder driver via IPCThreadState.talkWithDriver()
Registering a custom service (CustomServer)

ServiceManager.addService() is meant for system services; an application's own custom services cannot be registered this way.

//frameworks/native/cmds/servicemanager/service_manager.c

static int svc_can_register(const uint16_t *name, size_t name_len, pid_t spid, uid_t uid)
{
const char *perm = "add";

if (multiuser_get_app_id(uid) >= AID_APP) { // AID_APP 为 10000
return 0; /* Don't allow apps to register services */
}

return check_mac_perms_from_lookup(spid, uid, perm, str8(name, name_len)) ? 1 : 0;
}

Custom Services are assigned uids greater than 10000, so when a custom Service reaches this check it is rejected.

Instead, an app normally starts its service with startService() and uses bindService() to bind to it and interact with it, as sketched below.
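
As a hedged illustration of the server side of that pattern (the class and interface names MathService and IMathService are hypothetical, and a matching IMathService.aidl is assumed), a custom service exposes its Binder through onBind(); the client half is shown later in the asInterface() section.

import android.app.Service;
import android.content.Intent;
import android.os.IBinder;

// Server side: the Stub returned from onBind() is the local Binder entity.
// A client in the same process receives this object directly; a client in
// another process receives a BinderProxy that forwards calls through the driver.
public class MathService extends Service {

    private final IMathService.Stub mBinder = new IMathService.Stub() {
        @Override
        public int add(int a, int b) {
            return a + b;   // runs on one of this process's Binder threads
        }
    };

    @Override
    public IBinder onBind(Intent intent) {
        return mBinder;
    }
}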

Getting a service from Service Manager

A Client obtains a service from Service Manager.

The flow is roughly the same as for service registration;

only the final step differs: do_find_service() looks the registered service up in Service Manager.

Looking up a service

Getting a service from Java

Services are usually obtained through ServiceManager.getService().

Since ServiceManager cannot be called directly by apps, the call has to go through lower layers.

Context#getSystemService

The most common entry point is getSystemService:

//Context.java
public abstract @Nullable Object getSystemService(@ServiceName @NonNull String name);

//ContextImpl.java Context实现类
@Override
public Object getSystemService(String name) {
return SystemServiceRegistry.getSystemService(this, name);
}

Execution then moves into SystemServiceRegistry, which handles the SystemService-related flow:

//base/core/java/android/app/SystemServiceRegistry.java
//缓存 SystemServerName 与 ServiceFetcher映射关系
private static final HashMap<String, ServiceFetcher<?>> SYSTEM_SERVICE_FETCHERS =
new HashMap<String, ServiceFetcher<?>>();

static abstract interface ServiceFetcher<T> {
T getService(ContextImpl ctx);
}

static {
...
registerService(Context.LAYOUT_INFLATER_SERVICE, LayoutInflater.class,
new CachedServiceFetcher<LayoutInflater>()
{
@Override
public LayoutInflater createService(ContextImpl ctx) {
return new PhoneLayoutInflater(ctx.getOuterContext());
}});
...
registerService(Context.CONNECTIVITY_SERVICE, ConnectivityManager.class,
new StaticApplicationContextServiceFetcher<ConnectivityManager>()
{
@Override
public ConnectivityManager createService(Context context) throws ServiceNotFoundException {
IBinder b = ServiceManager.getServiceOrThrow(Context.CONNECTIVITY_SERVICE);
IConnectivityManager service = IConnectivityManager.Stub.asInterface(b);
return new ConnectivityManager(context, service);
}});
...
registerService(Context.WIFI_P2P_SERVICE, WifiP2pManager.class,
new StaticServiceFetcher<WifiP2pManager>()
{
@Override
public WifiP2pManager createService() throws ServiceNotFoundException {
IBinder b = ServiceManager.getServiceOrThrow(Context.WIFI_P2P_SERVICE);
IWifiP2pManager service = IWifiP2pManager.Stub.asInterface(b);
return new WifiP2pManager(service);
}});
...

}
//注册系统服务 在SYSTEM_SERVICE_FETCHERS 添加
private static <T> void registerService(String serviceName, Class<T> serviceClass,
ServiceFetcher<T> serviceFetcher)
{
SYSTEM_SERVICE_NAMES.put(serviceClass, serviceName);
SYSTEM_SERVICE_FETCHERS.put(serviceName, serviceFetcher);
}

public static Object getSystemService(ContextImpl ctx, String name) {
ServiceFetcher<?> fetcher = SYSTEM_SERVICE_FETCHERS.get(name);
return fetcher != null ? fetcher.getService(ctx) : null;
}

From the source above, ServiceFetcher has three main implementations:

  • CachedServiceFetcher: the SystemService is cached inside the process; another process has to fetch it again
  • StaticServiceFetcher: cached statically, so every caller gets the same SystemService instance
  • StaticApplicationContextServiceFetcher: cached per application; other applications have to fetch it again

Each ServiceFetcher implementation must implement createService(), which internally calls ServiceManager.getServiceOrThrow().
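
For reference, this is all an app normally sees of that machinery; a minimal sketch of the usual call site (the wrapper class and method here are hypothetical):

import android.content.Context;
import android.net.ConnectivityManager;

class SystemServiceExample {
    // Context.getSystemService() looks up the ServiceFetcher registered above; the
    // fetcher's createService() obtains the IBinder from ServiceManager and wraps it
    // in the manager class the app actually uses.
    static ConnectivityManager connectivityManager(Context context) {
        return (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
    }
}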

ServiceManager#getService()
//core/java/android/os/ServiceManager.java
public static IBinder getServiceOrThrow(String name) throws ServiceNotFoundException {
final IBinder binder = getService(name);
if (binder != null) {
return binder;
} else {
throw new ServiceNotFoundException(name);
}
}

public static IBinder getService(String name) {
try {
//缓存直接获取
IBinder service = sCache.get(name);
if (service != null) {
return service;
} else {

return Binder.allowBlocking(rawGetService(name));
}
} catch (RemoteException e) {
Log.e(TAG, "error in getService", e);
}
return null;
}

private static IBinder rawGetService(String name) throws RemoteException {
final long start = sStatLogger.getTime();

final IBinder binder = getIServiceManager().getService(name);
...
}

getIServiceManager() ultimately resolves to the ServiceManagerProxy:

//core/java/android/os/ServiceManagerNative.java
public IBinder getService(String name) throws RemoteException {
Parcel data = Parcel.obtain();
Parcel reply = Parcel.obtain();
data.writeInterfaceToken(IServiceManager.descriptor);
data.writeString(name);
mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);
//读取Binder对象
IBinder binder = reply.readStrongBinder();
reply.recycle();
data.recycle();
return binder;
}
mRemote.transact(GET_SERVICE_TRANSACTION)

Sends the GET_SERVICE_TRANSACTION data to the Binder driver via IPCThreadState.talkWithDriver().

readStrongBinder()

Essentially the reverse of writeStrongBinder():

static jobject android_os_Parcel_readStrongBinder(JNIEnv* env, jclass clazz, jlong nativePtr)
{
Parcel* parcel = reinterpret_cast<Parcel*>(nativePtr);
if (parcel != NULL) {
return javaObjectForIBinder(env, parcel->readStrongBinder());
}
return NULL;
}
//frameworks/native/libs/binder/Parcel.cpp
sp<IBinder> Parcel::readStrongBinder() const
{
sp<IBinder> val;
// Note that a lot of code in Android reads binders by hand with this
// method, and that code has historically been ok with getting nullptr
// back (while ignoring error codes).
readNullableStrongBinder(&val);
return val;
}

status_t Parcel::readNullableStrongBinder(sp<IBinder>* val) const
{
return unflattenBinder(val);
}

status_t Parcel::unflattenBinder(sp<IBinder>* out) const
{
const flat_binder_object* flat = readObject(false);

if (flat) {
switch (flat->hdr.type) {
case BINDER_TYPE_BINDER: {
sp<IBinder> binder = reinterpret_cast<IBinder*>(flat->cookie);
return finishUnflattenBinder(binder, out);
}
case BINDER_TYPE_HANDLE: {
sp<IBinder> binder =
ProcessState::self()->getStrongProxyForHandle(flat->handle);
return finishUnflattenBinder(binder, out);
}
}
}
return BAD_TYPE;
}

Reading the flat_binder_object yields an IBinder object, which is in fact a BpBinder:

//core/jni/android_util_Binder.cpp
jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val)
{
if (val == NULL) return NULL;

if (val->checkSubclass(&gBinderOffsets)) {
// It's a JavaBBinder created by ibinderForJavaObject. Already has Java object.
jobject object = static_cast<JavaBBinder*>(val.get())->object();
LOGDEATH("objectForBinder %p: it's our own %p!\n", val.get(), object);
return object;
}

// For the rest of the function we will hold this lock, to serialize
// looking/creation/destruction of Java proxies for native Binder proxies.
AutoMutex _l(gProxyLock);

BinderProxyNativeData* nativeData = gNativeDataCache;
if (nativeData == nullptr) {
nativeData = new BinderProxyNativeData();
}
// gNativeDataCache is now logically empty.
jobject object = env->CallStaticObjectMethod(gBinderProxyOffsets.mClass,
gBinderProxyOffsets.mGetInstance, (jlong) nativeData, (jlong) val.get());
if (env->ExceptionCheck()) {
// In the exception case, getInstance still took ownership of nativeData.
gNativeDataCache = nullptr;
return NULL;
}
BinderProxyNativeData* actualNativeData = getBPNativeData(env, object);
if (actualNativeData == nativeData) {
// New BinderProxy; we still have exclusive access.
nativeData->mOrgue = new DeathRecipientList;
nativeData->mObject = val;
gNativeDataCache = nullptr;
++gNumProxies;
if (gNumProxies >= gProxiesWarned + PROXY_WARN_INTERVAL) {
ALOGW("Unexpectedly many live BinderProxies: %d\n", gNumProxies);
gProxiesWarned = gNumProxies;
}
} else {
// nativeData wasn't used. Reuse it the next time.
gNativeDataCache = nativeData;
}

return object;
}

javaObjectForIBinder() then converts the BpBinder object into a BinderProxy object.

So what the Client ends up holding after ServiceManager.getService() is a BinderProxy object.

The Binder driver

The source analysis below is based on the android-goldfish-4.4-dev kernel.

The driver is Android-specific; it is registered as a misc device with the node /dev/binder and operates on device memory directly.

The Binder driver lives in the kernel, at /drivers/android/binder.c.

Loading

Binder initialization: binder_init()

Registers the misc device.

//drivers/android/binder.c
static int __init binder_init(void){
int ret;
char * device_name, * device_names;
struct binder_device * device;
struct hlist_node * tmp;

binder_alloc_shrinker_init();

atomic_set(& binder_transaction_log.cur, ~0U);
atomic_set(& binder_transaction_log_failed.cur, ~0U);
//构建工作队列
binder_deferred_workqueue = create_singlethread_workqueue("binder");
if (!binder_deferred_workqueue)
return -ENOMEM;

while ((device_name = strsep(& device_names, ","))) {
//注册bidner设备
ret = init_binder_device(device_name);
if (ret)
goto err_init_binder_device_failed;
}

return ret;
}
// error handling during execution omitted
static int __init init_binder_device(const char *name)
{
int ret;
struct binder_device *binder_device;

binder_device->miscdev.fops = &binder_fops;
binder_device->miscdev.minor = MISC_DYNAMIC_MINOR;
binder_device->miscdev.name = name;

binder_device->context.binder_context_mgr_uid = INVALID_UID;
binder_device->context.name = name;
mutex_init(&binder_device->context.context_mgr_node_lock);

//注册misc设备
ret = misc_register(&binder_device->miscdev);

hlist_add_head(&binder_device->hlist, &binder_devices);

return ret;
}

The Binder device is registered through misc_register with the following configuration:

miscdev.fops = &binder_fops;
miscdev.minor = MISC_DYNAMIC_MINOR;
miscdev.name = name //name为以下三个 binder hwbinder cdvbinder

static const struct file_operations binder_fops = {
.owner = THIS_MODULE,
.poll = binder_poll,
.unlocked_ioctl = binder_ioctl,
.compat_ioctl = binder_ioctl,
.mmap = binder_mmap,
.open = binder_open,
.flush = binder_flush,
.release = binder_release,
};
Opening the Binder device: binder_open()

Opens the binder driver device.

static int binder_open(struct inode *nodp, struct file *filp)
{
struct binder_proc *proc;
struct binder_device *binder_dev;

binder_debug(BINDER_DEBUG_OPEN_CLOSE, "binder_open: %d:%d\n",
current->group_leader->pid, current->pid);

proc = kzalloc(sizeof(*proc), GFP_KERNEL);
if (proc == NULL)
return -ENOMEM;
spin_lock_init(&proc->inner_lock);
spin_lock_init(&proc->outer_lock);
get_task_struct(current->group_leader);
proc->tsk = current->group_leader;
INIT_LIST_HEAD(&proc->todo);
//记录进程优先级
if (binder_supported_policy(current->policy)) {
proc->default_priority.sched_policy = current->policy;
proc->default_priority.prio = current->normal_prio;
} else {
proc->default_priority.sched_policy = SCHED_NORMAL;
proc->default_priority.prio = NICE_TO_PRIO(0);
}

binder_dev = container_of(filp->private_data, struct binder_device,
miscdev);
proc->context = &binder_dev->context;
binder_alloc_init(&proc->alloc);

binder_stats_created(BINDER_STAT_PROC);
proc->pid = current->group_leader->pid;
INIT_LIST_HEAD(&proc->delivered_death);
INIT_LIST_HEAD(&proc->waiting_threads);
filp->private_data = proc;

mutex_lock(&binder_procs_lock);
hlist_add_head(&proc->proc_node, &binder_procs);//创建好的binder_proc对象插入到 binder_procs中
mutex_unlock(&binder_procs_lock);


return 0;
}

It creates a binder_proc object and stores the current process's information in it; the structure looks like this:

struct binder_proc {
struct hlist_node proc_node;
struct rb_root threads; //对应的binder线程
struct rb_root nodes; //binder节点
struct rb_root refs_by_desc;
struct rb_root refs_by_node;
struct list_head waiting_threads;
int pid;
struct task_struct *tsk;
struct hlist_node deferred_work_node;
int deferred_work;
bool is_dead;

struct list_head todo;//当前进程的任务
struct binder_stats stats;
struct list_head delivered_death;
int max_threads; //最大并发线程数
int requested_threads;
int requested_threads_started;
int tmp_ref;
struct binder_priority default_priority;
struct dentry *debugfs_entry;
struct binder_alloc alloc;
struct binder_context *context;
spinlock_t inner_lock;
spinlock_t outer_lock;
};

binder_procs

Binder memory mapping: binder_mmap()

The kernel first reserves a region of kernel virtual address space with the same size as the user-space VMA (*vma); it then allocates one page of physical memory and maps that same physical memory into both the kernel virtual address space and the user virtual address space.

This is how user space and kernel space end up operating on the same buffer.

//drivers/android/binder.c
static int binder_mmap(struct file *filp/*Bidner驱动的fd*/, struct vm_area_struct *vma/*用户虚拟内存*/)
{
int ret;
struct binder_proc *proc = filp->private_data;
const char *failure_string;

//保证映射内存大小不会超过4M
if ((vma->vm_end - vma->vm_start) > SZ_4M)
vma->vm_end = vma->vm_start + SZ_4M;

vma->vm_flags = (vma->vm_flags | VM_DONTCOPY) & ~VM_MAYWRITE;
vma->vm_ops = &binder_vm_ops;
vma->vm_private_data = proc;


ret = binder_alloc_mmap_handler(&proc->alloc, vma);

return ret;
}
int binder_alloc_mmap_handler(struct binder_alloc *alloc,
struct vm_area_struct *vma)

{
mutex_lock(&binder_alloc_mmap_lock);
if (alloc->buffer) {
ret = -EBUSY;
failure_string = "already mapped";
goto err_already_mapped;
}
//分配一个连续的内核虚拟空间
area = get_vm_area(vma->vm_end - vma->vm_start, VM_IOREMAP);
...
//分配物理页的指针数组
alloc->pages = kzalloc(sizeof(alloc->pages[0]) *
((vma->vm_end - vma->vm_start) / PAGE_SIZE),
GFP_KERNEL);
if (alloc->pages == NULL) {
ret = -ENOMEM;
failure_string = "alloc page array";
goto err_alloc_pages_failed;
}
alloc->buffer_size = vma->vm_end - vma->vm_start;

buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
if (!buffer) {
ret = -ENOMEM;
failure_string = "alloc buffer struct";
goto err_alloc_buf_struct_failed;
}

buffer->data = alloc->buffer;
list_add(&buffer->entry, &alloc->buffers);
buffer->free = 1;
//
binder_insert_free_buffer(alloc, buffer);
//异步可用空间大小为 buffer总大小的一半
alloc->free_async_space = alloc->buffer_size / 2;
barrier();
alloc->vma = vma;
alloc->vma_vm_mm = vma->vm_mm;
/* Same as mmgrab() in later kernel versions */
atomic_inc(&alloc->vma_vm_mm->mm_count);

return 0;

}

Because the same physical pages are mapped into both the process's address space and the kernel's, transferring data between the two only requires one side to copy the data into the physical pages; the other side can read it directly. In other words, cross-process data transfer needs only a single copy.

Binder data exchange: binder_ioctl()

Responsible for exchanging IPC data and IPC reply data between two processes.

static long binder_ioctl(struct file *filp/*binder驱动的fd*/, unsigned int cmd/*ioctl命令*/, unsigned long arg/*数据类型*/)
{
...
switch (cmd) {
case BINDER_WRITE_READ:
ret = binder_ioctl_write_read(filp, cmd, arg, thread);
if (ret)
goto err;
break;
case BINDER_SET_MAX_THREADS: {
int max_threads;

if (copy_from_user(&max_threads, ubuf,
sizeof(max_threads))) {
ret = -EINVAL;
goto err;
}
binder_inner_proc_lock(proc);
proc->max_threads = max_threads;
binder_inner_proc_unlock(proc);
break;
}
case BINDER_SET_CONTEXT_MGR:
ret = binder_ioctl_set_ctx_mgr(filp);
if (ret)
goto err;
break;

}

}

The binder driver splits its work into different commands and dispatches on them. The common ones are:

  • BINDER_WRITE_READ: send and receive Binder IPC data

    Usage: Service Manager reads and writes data to the Binder driver by issuing BINDER_WRITE_READ

  • BINDER_SET_MAX_THREADS: set the maximum number of binder threads for the process

    Usage: when ProcessState is initialized, it sets the maximum number of threads the process supports (15 by default) with BINDER_SET_MAX_THREADS

  • BINDER_SET_CONTEXT_MGR: make Service Manager the context manager

    Usage: during ServiceManager startup, binder_become_context_manager() issues BINDER_SET_CONTEXT_MGR

The most frequently used command is BINDER_WRITE_READ; here is a brief walk through its flow:

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
...
switch (cmd) {
case BINDER_WRITE_READ:
ret = binder_ioctl_write_read(filp, cmd, arg, thread);
if (ret)
goto err;
break;
...

}
static int binder_ioctl_write_read(struct file *filp,
unsigned int cmd, unsigned long arg,
struct binder_thread *thread)

{
int ret = 0;
struct binder_proc *proc = filp->private_data;
unsigned int size = _IOC_SIZE(cmd);
void __user *ubuf = (void __user *)arg;
struct binder_write_read bwr;

if (size != sizeof(struct binder_write_read)) {
ret = -EINVAL;
goto out;
}
//从用户空间拷贝数据到 bwr(内核空间)
if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
ret = -EFAULT;
goto out;
}

if (bwr.write_size > 0) {
//存在写数据时,执行binder写操作
ret = binder_thread_write(proc, thread,
bwr.write_buffer,
bwr.write_size,
&bwr.write_consumed);
trace_binder_write_done(ret);
if (ret < 0) {
bwr.read_consumed = 0;
if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
ret = -EFAULT;
goto out;
}
}

if (bwr.read_size > 0) {
//存在读数据,执行binder读操作
ret = binder_thread_read(proc, thread, bwr.read_buffer,
bwr.read_size,
&bwr.read_consumed,
filp->f_flags & O_NONBLOCK);
trace_binder_read_done(ret);
binder_inner_proc_lock(proc);
if (!binder_worklist_empty_ilocked(&proc->todo))
binder_wakeup_proc_ilocked(proc);
binder_inner_proc_unlock(proc);
if (ret < 0) {
if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
ret = -EFAULT;
goto out;
}
}

//将内核数据 bwr 拷贝回 用户空间
if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
ret = -EFAULT;
goto out;
}
out:
return ret;
}

binder_write_read is the structure defined in the kernel:

struct binder_write_read {//用在binder内部
binder_size_t write_size; //用户空间写入数据的size
binder_size_t write_consumed; //Binder读取了多少数据
binder_uintptr_t write_buffer;//用户空间写入数据
binder_size_t read_size; //binder写入数据的size
binder_size_t read_consumed; //用户空间读取了多少数据
binder_uintptr_t read_buffer; //Binder写入数据
};

binder_transaction_data

binder_ioctl_write_read() performs the following steps:

  1. copy_from_user() copies the user-space data into kernel space
  2. If write_size > 0, data has been passed in from outside, so binder_thread_write() is executed to consume it
  3. If read_size > 0, data needs to be passed out, so binder_thread_read() is executed to produce it
  4. Finally copy_to_user() copies the kernel-space data back to user space

The BINDER_WRITE_READ flow

Summary

Binder driver loading

Binder communication flow

binder_ipc_process

Binder permission verification

Suppose process A calls process B over Binder, and B then makes a Binder call into process C; at that point the IPCThreadState involved still holds process A's PID and UID. If process B now tries to call process C on its own behalf, the call fails with an exception such as Bad call: specified package com.providers.xxx under uid 10032 but it is really 10001.

In other words, Binder's permission verification can make process B's calls back into the original process fail after A has called B.

That is how Binder permission verification works:

when a process is called, it checks whether the caller matches the uid and pid stored in its own IPCThreadState; only if they match does the request succeed, otherwise an exception is thrown.

Solution

final long origId = Binder.clearCallingIdentity();
// ... remote call happens here ...
Binder.restoreCallingIdentity(origId);

clearCallingIdentity()

Resets the incoming IPC identity (uid/pid) on the current thread, setting mCallingUid and mCallingPid to the current process's own values.

restoreCallingIdentity(id)

Restores the original caller's mCallingPid and mCallingUid saved earlier.
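
A hedged sketch of how the two calls are typically combined inside a Binder service method; checkCallerPermission() and queryInternal() are hypothetical helpers:

import android.os.Binder;
import android.os.Bundle;

class DataBinderSketch {
    Bundle getData(String callingPackage) {
        // 1. While mCallingUid/mCallingPid still describe the real caller, validate it.
        checkCallerPermission(Binder.getCallingUid(), callingPackage);

        // 2. Switch the IPC identity to this process before calling out to another
        //    service, so that call is checked against us rather than the original app.
        final long token = Binder.clearCallingIdentity();
        try {
            return queryInternal(callingPackage);
        } finally {
            // 3. Always restore the original caller's identity.
            Binder.restoreCallingIdentity(token);
        }
    }

    private void checkCallerPermission(int uid, String pkg) { /* hypothetical check */ }

    private Bundle queryInternal(String pkg) { return new Bundle(); /* hypothetical call */ }
}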

Binder and AIDL

AIDL stands for Android Interface Definition Language.

Messenger is built on top of AIDL, but it can only process messages serially; when many messages need to be handled concurrently, AIDL is used directly.

AIDL is essentially a tool the system provides to implement a Binder quickly; you do not have to rely on AIDL to get the same functionality, as the hand-written sketch below shows.
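
To make that point concrete, here is a minimal hand-written version of what AIDL generates (DESCRIPTOR, CODE_ADD, AddBinder and addRemotely are hypothetical names):

import android.os.Binder;
import android.os.IBinder;
import android.os.Parcel;
import android.os.RemoteException;

// Server side: a Binder subclass that unpacks the request in onTransact().
public class AddBinder extends Binder {
    public static final String DESCRIPTOR = "demo.IAdd";
    public static final int CODE_ADD = IBinder.FIRST_CALL_TRANSACTION;

    @Override
    protected boolean onTransact(int code, Parcel data, Parcel reply, int flags)
            throws RemoteException {
        if (code == CODE_ADD) {
            data.enforceInterface(DESCRIPTOR);
            int result = data.readInt() + data.readInt();
            reply.writeNoException();
            reply.writeInt(result);
            return true;
        }
        return super.onTransact(code, data, reply, flags);
    }

    // Client side: given a (possibly remote) IBinder, pack a Parcel and call transact().
    public static int addRemotely(IBinder binder, int a, int b) throws RemoteException {
        Parcel data = Parcel.obtain();
        Parcel reply = Parcel.obtain();
        try {
            data.writeInterfaceToken(DESCRIPTOR);
            data.writeInt(a);
            data.writeInt(b);
            binder.transact(CODE_ADD, data, reply, 0); // blocks until BR_REPLY arrives
            reply.readException();
            return reply.readInt();
        } finally {
            reply.recycle();
            data.recycle();
        }
    }
}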

Data types supported by AIDL

  • Primitive types: byte, int, long, float, double, boolean, char
  • String and CharSequence
  • ArrayList and HashMap (every element, including keys, must itself be supported by AIDL)
  • Objects implementing the Parcelable interface (they must be imported explicitly; a minimal example follows this list)
  • All AIDL interfaces themselves (they must also be imported explicitly)
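
A minimal Parcelable that could be passed through AIDL, as a hedged illustration (the Book class is hypothetical, and a matching Book.aidl declaring "parcelable Book" is assumed):

import android.os.Parcel;
import android.os.Parcelable;

public class Book implements Parcelable {
    public final String name;
    public final int price;

    public Book(String name, int price) {
        this.name = name;
        this.price = price;
    }

    protected Book(Parcel in) {            // rebuild the object on the receiving side
        name = in.readString();
        price = in.readInt();
    }

    @Override
    public void writeToParcel(Parcel dest, int flags) {   // flatten on the sending side
        dest.writeString(name);
        dest.writeInt(price);
    }

    @Override
    public int describeContents() {
        return 0;
    }

    public static final Creator<Book> CREATOR = new Creator<Book>() {
        @Override
        public Book createFromParcel(Parcel in) {
            return new Book(in);
        }

        @Override
        public Book[] newArray(int size) {
            return new Book[size];
        }
    };
}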

Direction tags

Apart from primitive types, every parameter of another type must carry a direction tag, in, out, or inout, describing which way the data flows in the cross-process call (see the sketch after this list):

  • in: data flows only from client to server. The server receives the complete object, but any modification the server makes does not affect the object the client passed in.
  • out: data flows only from server to client. The server receives an empty object, and any change the server makes to it is synchronized back to the client.
  • inout: data flows both ways. The server receives the complete object, and the client sees any change the server makes to it.
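
A hedged sketch of what the tags mean at runtime, written against the getResult() signature from the AIDL example in the next subsection (the server-side class here is hypothetical):

class BookService {
    // Server-side implementation of the generated BookManager.Stub:
    final BookManager.Stub binder = new BookManager.Stub() {
        @Override
        public int getResult(int a, java.util.List<String> b,
                             java.util.List<String> c, String d) {
            // 'b' is "out": it arrives empty; whatever the server puts in it is
            // written back into the caller's list when the call returns.
            b.add("filled by server");

            // 'c' is "inout": the server sees the caller's elements, and its
            // additions are synchronized back to the caller as well.
            c.add("appended by server");

            // 'a' and 'd' are "in": changing them here has no effect on the caller.
            return a + c.size();
        }
    };
}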

Key classes and methods

The AIDL file

interface BookManager {
int getResult(int a,out List<String> b,inout List<String> c ,in String d);
}

Compiling the AIDL file produces a Java file whose main content looks like this:

public interface BookManager extends IInterface {
public static abstract class Stub extends Binder implements BookManager {
public static BookManager asInterface(IBinder obj) {
if ((obj == null)) {
return null;
}
//寻找本地的Binder对象
IInterface iin = obj.queryLocalInterface(DESCRIPTOR);
if (((iin != null) && (iin instanceof BookManager))) {
//本地Binder对象存在,表示当前 Client与 Server处于同一进程,直接调用对应方法即可
return ((BookManager) iin);
}
//不存在本地Binder对象,生成代理对象进行远程调用
return new Proxy(obj);
}

//返回当前Binder对象
// IBinder 这个代表了一种跨进程通信的能力。只要实现了这个接口,这个对象就可以跨进程传输。Client和Server进程都要实现该接口。
@Override
public android.os.IBinder asBinder() {
return this;
}

@Override
public boolean onTransact(int code, Parcel data, Parcel reply, int flags) throws RemoteException {
switch (code) {
case TRANSACTION_getResult: {
data.enforceInterface(descriptor);
int _arg0;
_arg0 = data.readInt();
java.util.List<java.lang.String> _arg1;
_arg1 = new java.util.ArrayList<java.lang.String>();
java.util.List<java.lang.String> _arg2;
_arg2 = data.createStringArrayList();
java.lang.String _arg3;
_arg3 = data.readString();
int _result = this.getResult(_arg0, _arg1, _arg2, _arg3);
reply.writeNoException();
reply.writeInt(_result);
reply.writeStringList(_arg1);
reply.writeStringList(_arg2);
return true;
}
}
}

private static class Proxy implements BookManager {
private android.os.IBinder mRemote;

Proxy(android.os.IBinder remote) {
mRemote = remote;
}

@Override
public android.os.IBinder asBinder() {
return mRemote;
}

@Override
public int getResult(int a, java.util.List<java.lang.String> b, java.util.List<java.lang.String> c, java.lang.String d) throws android.os.RemoteException {
android.os.Parcel _data = android.os.Parcel.obtain();
android.os.Parcel _reply = android.os.Parcel.obtain();
int _result;
try {
_data.writeInterfaceToken(DESCRIPTOR);
_data.writeInt(a);//int a 默认为 in
_data.writeStringList(c);//inout c
_data.writeString(d);//in String d
//客户端调用 该方法传参到 服务端
mRemote.transact(Stub.TRANSACTION_getResult, _data, _reply, 0);
_reply.readException();
_result = _reply.readInt();
_reply.readStringList(b);//out b
_reply.readStringList(c);//inout c
} finally {
_reply.recycle();
_data.recycle();
}
return _result;
}

static final int TRANSACTION_getResult = (android.os.IBinder.FIRST_CALL_TRANSACTION + 1);
}

public int getResult(int a, List<String> b, List<String> c, String d) throws RemoteException;

}

From the generated code above:

Classes / interfaces

IInterface

Describes the capabilities the Server process offers, i.e. the methods the Client process can call.

Stub

A cross-process call object that extends Binder. It is the Server process's local Binder object and must implement the functionality the Server provides.

Proxy

The Binder proxy object, living in the Client process; it initiates transact() to communicate with the Binder driver.

Methods

asBinder()

Returns the Binder object of the current process.

  • For the Client process: returns the remote process's BinderProxy object
  • For the Server process: returns the current process's IBinder object

asInterface()

Usually called after bindService(), inside onServiceConnected(): it converts the received IBinder into the IInterface, after which the Server process's methods can be called directly.
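
A hedged client-side sketch (BookManager as in the generated code above; the surrounding class and field names are hypothetical):

import android.content.ComponentName;
import android.content.ServiceConnection;
import android.os.IBinder;

class BookClient {
    private BookManager mBookManager;

    final ServiceConnection mConnection = new ServiceConnection() {
        @Override
        public void onServiceConnected(ComponentName name, IBinder service) {
            // Same process: asInterface() returns the Stub itself.
            // Different process: it returns a Proxy wrapping the BinderProxy,
            // so every call goes through the Binder driver.
            mBookManager = BookManager.Stub.asInterface(service);
        }

        @Override
        public void onServiceDisconnected(ComponentName name) {
            mBookManager = null;
        }
    };
}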

onTransact(int code, Parcel data, Parcel reply, int flags)

Parameters:

  • code: identifies which method the Client is requesting
  • data: the Client's request arguments
  • reply: the Server's return values
  • flags: the IPC mode
    • 0: two-way (the default)
    • 1: one-way

These correspond to the arguments each Proxy method passes to mRemote.transact().

//TODO: relationship diagram

From the generated code, the way AIDL works internally can be summarized as the Binder working model below:

The Binder working model

  • The Client calls the remote Binder object and is then suspended, waiting for the Server's response (see the sketch after this list)
  • The Binder proxy object forwards the request to the Binder driver
  • The Binder driver forwards the request to the Server
  • After handling the request, the Server returns the result to the Binder driver, which returns it to the Client and wakes it up
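
Because the first step suspends the calling thread, a slow remote call should not be issued on the main thread; a hedged sketch (bookManager as obtained in the asInterface() example, the surrounding class is hypothetical):

import android.os.RemoteException;
import java.util.ArrayList;
import java.util.concurrent.Executors;

class RemoteCallSketch {
    void queryAsync(BookManager bookManager) {
        Executors.newSingleThreadExecutor().execute(() -> {
            try {
                // The worker thread blocks here until the Server replies (BR_REPLY).
                int result = bookManager.getResult(1, new ArrayList<>(), new ArrayList<>(), "query");
                // post 'result' back to the UI thread if needed
            } catch (RemoteException e) {
                // the Server process died while we were waiting
            }
        });
    }
}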

Binder and system calls

The implementation of system_call lives in the kernel.

How do the many different ioctl call sites end up in exactly the right handler?

Every ioctl goes through the system-call layer; the device's file descriptor selects the driver's file_operations table (for /dev/binder, binder_fops registers binder_ioctl), and the cmd argument then selects the specific operation inside that handler.

More Binder facts

What is the Binder transaction size limit, and what happens if it is exceeded?

By default the maximum amount of data a Binder transaction can carry is 1016 KB; for asynchronous calls it is at most 508 KB.

The mmap'd area is 1016 KB; if a call needs more than this region can provide, the Binder driver cannot handle it and an exception such as DeadObjectException is thrown. The sketch below shows one way to stay under the limit.
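
A hedged sketch: split a large payload into several smaller transactions instead of one huge Parcel (the BookStore interface, addBooks() and the chunk size are hypothetical):

import android.os.RemoteException;
import java.util.ArrayList;
import java.util.List;

class ChunkedSender {
    /** Hypothetical AIDL-style interface with a method that accepts a batch of items. */
    interface BookStore {
        void addBooks(List<String> books) throws RemoteException;
    }

    private static final int CHUNK = 200;   // arbitrary size, chosen to stay well under the budget

    void sendAll(BookStore store, List<String> all) throws RemoteException {
        for (int i = 0; i < all.size(); i += CHUNK) {
            List<String> slice = all.subList(i, Math.min(i + CHUNK, all.size()));
            store.addBooks(new ArrayList<>(slice));   // each call is its own Binder transaction
        }
    }
}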

What is the maximum number of Binder threads per process, and what happens beyond it?

Each process can run at most 16 Binder threads.

When all 16 binder threads are busy, the thread pool is starved, and any new incoming binder request has to block and wait; the sketch below shows one way to keep binder threads free.
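
A hedged sketch of keeping Binder threads available: hand incoming work to an internal executor and return quickly (the class, submit() and handle() are hypothetical):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class TaskServiceSketch {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    // Imagine this method being invoked from onTransact() / an AIDL Stub, i.e. on one
    // of the process's Binder threads: keep it short and move the heavy work elsewhere.
    void submit(String job) {
        worker.execute(() -> handle(job));   // frees the Binder thread immediately
    }

    private void handle(String job) { /* hypothetical long-running work */ }
}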

What does oneway do?

Asynchronous calls and serialized handling:

  • Asynchronous call: after sending data to the Binder driver, the caller does not suspend its thread waiting for the driver's reply; it finishes as soon as it receives BR_TRANSACTION_COMPLETE.
  • Serialized handling: oneway methods are not executed concurrently; the Binder driver serializes them so they run one after another.

Both oneway and non-oneway requests wait for the BR_TRANSACTION_COMPLETE message.

A oneway request, however, returns immediately after receiving BR_TRANSACTION_COMPLETE;

a non-oneway request also has to wait for BR_REPLY before returning. During that wait the thread sleeps, ultimately inside wait_event_interruptible().

//XX.aidl
interface IXX {
    // declare this method call as oneway
    oneway void xx();
}

References

Binder介绍

Binder系列讲解

图解Android - Binder 和 Service

深入理解Binder通信原理及面试问题

Android Binder设计与实现-设计篇

Bidner|内存拷贝的本质和变迁

