{
  "log": [
    {
      "commit": "eefa864a81501161c05b35f14197677c937e5b9a",
      "tree": "4b17196c4445b5b779848e091251dc16f7df101b",
      "parents": [
        "e2e3224122e64ebe15fe02a63e8fe09b64a8c743"
      ],
      "author": {
        "name": "Yonghong Song",
        "email": "yhs@fb.com",
        "time": "Wed Jan 17 09:19:32 2018 -0800"
      },
      "committer": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Thu Jan 18 01:51:42 2018 +0100"
      },
      "message": "bpf: change fake_ip for bpf_trace_printk helper\n\nCurrently, for bpf_trace_printk helper, fake ip address 0x1\nis used with comments saying that fake ip will not be printed.\nThis is indeed true for 4.12 and earlier version, but for\n4.13 and later version, the ip address will be printed if\nit cannot be resolved with kallsym. Running samples/bpf/tracex5\nprogram and you will have the following in the debugfs\ntrace_pipe output:\n  ...\n  \u003c...\u003e-1819  [003] ....   443.497877: 0x00000001: mmap\n  \u003c...\u003e-1819  [003] ....   443.498289: 0x00000001: syscall\u003d102 (one of get/set uid/pid/gid)\n  ...\n\nThe kernel commit changed this behavior is:\n  commit feaf1283d11794b9d518fcfd54b6bf8bee1f0b4b\n  Author: Steven Rostedt (VMware) \u003crostedt@goodmis.org\u003e\n  Date:   Thu Jun 22 17:04:55 2017 -0400\n\n      tracing: Show address when function names are not found\n  ...\n\nThis patch changed the comment and also altered the fake ip\naddress to 0x0 as users may think 0x1 has some special meaning\nwhile it doesn\u0027t. The new output:\n  ...\n  \u003c...\u003e-1799  [002] ....    25.953576: 0: mmap\n  \u003c...\u003e-1799  [002] ....    25.953865: 0: read(fd\u003d0, buf\u003d00000000053936b5, size\u003d512)\n  ...\n\nSigned-off-by: Yonghong Song \u003cyhs@fb.com\u003e\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\n"
    },
    {
      "commit": "540adea3809f61115d2a1ea4ed6e627613452ba1",
      "tree": "03ba07d13807d06d52053b2d02565075f210c2e2",
      "parents": [
        "66665ad2f1023d3ffb0c12eea9e0a6d0b613ecb3"
      ],
      "author": {
        "name": "Masami Hiramatsu",
        "email": "mhiramat@kernel.org",
        "time": "Sat Jan 13 02:55:03 2018 +0900"
      },
      "committer": {
        "name": "Alexei Starovoitov",
        "email": "ast@kernel.org",
        "time": "Fri Jan 12 17:33:38 2018 -0800"
      },
      "message": "error-injection: Separate error-injection from kprobe\n\nSince error-injection framework is not limited to be used\nby kprobes, nor bpf. Other kernel subsystems can use it\nfreely for checking safeness of error-injection, e.g.\nlivepatch, ftrace etc.\nSo this separate error-injection framework from kprobes.\n\nSome differences has been made:\n\n- \"kprobe\" word is removed from any APIs/structures.\n- BPF_ALLOW_ERROR_INJECTION() is renamed to\n  ALLOW_ERROR_INJECTION() since it is not limited for BPF too.\n- CONFIG_FUNCTION_ERROR_INJECTION is the config item of this\n  feature. It is automatically enabled if the arch supports\n  error injection feature for kprobe or ftrace etc.\n\nSigned-off-by: Masami Hiramatsu \u003cmhiramat@kernel.org\u003e\nReviewed-by: Josef Bacik \u003cjbacik@fb.com\u003e\nSigned-off-by: Alexei Starovoitov \u003cast@kernel.org\u003e\n"
    },
    {
      "commit": "66665ad2f1023d3ffb0c12eea9e0a6d0b613ecb3",
      "tree": "2ccb32066d1acf7d5d9f56d881cd80133dba8f15",
      "parents": [
        "b4da3340eae2c3932144be3e81ccfd4e424d87b7"
      ],
      "author": {
        "name": "Masami Hiramatsu",
        "email": "mhiramat@kernel.org",
        "time": "Sat Jan 13 02:54:33 2018 +0900"
      },
      "committer": {
        "name": "Alexei Starovoitov",
        "email": "ast@kernel.org",
        "time": "Fri Jan 12 17:33:38 2018 -0800"
      },
      "message": "tracing/kprobe: bpf: Compare instruction pointer with original one\n\nCompare instruction pointer with original one on the\nstack instead using per-cpu bpf_kprobe_override flag.\n\nThis patch also consolidates reset_current_kprobe() and\npreempt_enable_no_resched() blocks. Those can be done\nin one place.\n\nSigned-off-by: Masami Hiramatsu \u003cmhiramat@kernel.org\u003e\nReviewed-by: Josef Bacik \u003cjbacik@fb.com\u003e\nSigned-off-by: Alexei Starovoitov \u003cast@kernel.org\u003e\n"
    },
    {
      "commit": "b4da3340eae2c3932144be3e81ccfd4e424d87b7",
      "tree": "0a8d2bfcd4dc6a32524dc2b83d2c9169f9bf124a",
      "parents": [
        "daaf24c634ab951cad3dcef28492001ef9c931d0"
      ],
      "author": {
        "name": "Masami Hiramatsu",
        "email": "mhiramat@kernel.org",
        "time": "Sat Jan 13 02:54:04 2018 +0900"
      },
      "committer": {
        "name": "Alexei Starovoitov",
        "email": "ast@kernel.org",
        "time": "Fri Jan 12 17:33:37 2018 -0800"
      },
      "message": "tracing/kprobe: bpf: Check error injectable event is on function entry\n\nCheck whether error injectable event is on function entry or not.\nCurrently it checks the event is ftrace-based kprobes or not,\nbut that is wrong. It should check if the event is on the entry\nof target function. Since error injection will override a function\nto just return with modified return value, that operation must\nbe done before the target function starts making stackframe.\n\nAs a side effect, bpf error injection is no need to depend on\nfunction-tracer. It can work with sw-breakpoint based kprobe\nevents too.\n\nSigned-off-by: Masami Hiramatsu \u003cmhiramat@kernel.org\u003e\nReviewed-by: Josef Bacik \u003cjbacik@fb.com\u003e\nSigned-off-by: Alexei Starovoitov \u003cast@kernel.org\u003e\n"
    },
    {
      "commit": "59436c9ee18d7faad0cd1875c9d8322668f98611",
      "tree": "64543535fdefc11589a24aa9c3e2bab1bd98f894",
      "parents": [
        "c30abd5e40dd863f88e26be09b6ce949145a630a",
        "46df3d209db080395a98fc0875bd05e45e8f44e0"
      ],
      "author": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Mon Dec 18 10:51:06 2017 -0500"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Mon Dec 18 10:51:06 2017 -0500"
      },
      "message": "Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next\n\nDaniel Borkmann says:\n\n\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\npull-request: bpf-next 2017-12-18\n\nThe following pull-request contains BPF updates for your *net-next* tree.\n\nThe main changes are:\n\n1) Allow arbitrary function calls from one BPF function to another BPF function.\n   As of today when writing BPF programs, __always_inline had to be used in\n   the BPF C programs for all functions, unnecessarily causing LLVM to inflate\n   code size. Handle this more naturally with support for BPF to BPF calls\n   such that this __always_inline restriction can be overcome. As a result,\n   it allows for better optimized code and finally enables to introduce core\n   BPF libraries in the future that can be reused out of different projects.\n   x86 and arm64 JIT support was added as well, from Alexei.\n\n2) Add infrastructure for tagging functions as error injectable and allow for\n   BPF to return arbitrary error values when BPF is attached via kprobes on\n   those. This way of injecting errors generically eases testing and debugging\n   without having to recompile or restart the kernel. Tags for opting-in for\n   this facility are added with BPF_ALLOW_ERROR_INJECTION(), from Josef.\n\n3) For BPF offload via nfp JIT, add support for bpf_xdp_adjust_head() helper\n   call for XDP programs. First part of this work adds handling of BPF\n   capabilities included in the firmware, and the later patches add support\n   to the nfp verifier part and JIT as well as some small optimizations,\n   from Jakub.\n\n4) The bpftool now also gets support for basic cgroup BPF operations such\n   as attaching, detaching and listing current BPF programs. As a requirement\n   for the attach part, bpftool can now also load object files through\n   \u0027bpftool prog load\u0027. This reuses libbpf which we have in the kernel tree\n   as well. bpftool-cgroup man page is added along with it, from Roman.\n\n5) Back then commit e87c6bc3852b (\"bpf: permit multiple bpf attachments for\n   a single perf event\") added support for attaching multiple BPF programs\n   to a single perf event. Given they are configured through perf\u0027s ioctl()\n   interface, the interface has been extended with a PERF_EVENT_IOC_QUERY_BPF\n   command in this work in order to return an array of one or multiple BPF\n   prog ids that are currently attached, from Yonghong.\n\n6) Various minor fixes and cleanups to the bpftool\u0027s Makefile as well\n   as a new \u0027uninstall\u0027 and \u0027doc-uninstall\u0027 target for removing bpftool\n   itself or prior installed documentation related to it, from Quentin.\n\n7) Add CONFIG_CGROUP_BPF\u003dy to the BPF kernel selftest config file which is\n   required for the test_dev_cgroup test case to run, from Naresh.\n\n8) Fix reporting of XDP prog_flags for nfp driver, from Jakub.\n\n9) Fix libbpf\u0027s exit code from the Makefile when libelf was not found in\n   the system, also from Jakub.\n\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\n\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "f4e2298e63d24bb7f5cf0f56f72867973cb7e652",
      "tree": "fd3dc43e1d9c9c9b14784fd0c3639e2cac7cea04",
      "parents": [
        "553a8f2f42dffc5414a82fffe55d9b8c0fbd383f"
      ],
      "author": {
        "name": "Yonghong Song",
        "email": "yhs@fb.com",
        "time": "Wed Dec 13 10:35:37 2017 -0800"
      },
      "committer": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Wed Dec 13 22:44:10 2017 +0100"
      },
      "message": "bpf/tracing: fix kernel/events/core.c compilation error\n\nCommit f371b304f12e (\"bpf/tracing: allow user space to\nquery prog array on the same tp\") introduced a perf\nioctl command to query prog array attached to the\nsame perf tracepoint. The commit introduced a\ncompilation error under certain config conditions, e.g.,\n  (1). CONFIG_BPF_SYSCALL is not defined, or\n  (2). CONFIG_TRACING is defined but neither CONFIG_UPROBE_EVENTS\n       nor CONFIG_KPROBE_EVENTS is defined.\n\nError message:\n  kernel/events/core.o: In function `perf_ioctl\u0027:\n  core.c:(.text+0x98c4): undefined reference to `bpf_event_query_prog_array\u0027\n\nThis patch fixed this error by guarding the real definition under\nCONFIG_BPF_EVENTS and provided static inline dummy function\nif CONFIG_BPF_EVENTS was not defined.\nIt renamed the function from bpf_event_query_prog_array to\nperf_event_query_prog_array and moved the definition from linux/bpf.h\nto linux/trace_events.h so the definition is in proximity to\nother prog_array related functions.\n\nFixes: f371b304f12e (\"bpf/tracing: allow user space to query prog array on the same tp\")\nReported-by: Stephen Rothwell \u003csfr@canb.auug.org.au\u003e\nSigned-off-by: Yonghong Song \u003cyhs@fb.com\u003e\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\n"
    },
    {
      "commit": "283ca526a9bd75aed7350220d7b1f8027d99c3fd",
      "tree": "e702c33467d5fc7b2c3b807addc3269774b9f40c",
      "parents": [
        "30791ac41927ebd3e75486f9504b6d2280463bf0"
      ],
      "author": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Tue Dec 12 02:25:30 2017 +0100"
      },
      "committer": {
        "name": "Alexei Starovoitov",
        "email": "ast@kernel.org",
        "time": "Tue Dec 12 09:51:12 2017 -0800"
      },
      "message": "bpf: fix corruption on concurrent perf_event_output calls\n\nWhen tracing and networking programs are both attached in the\nsystem and both use event-output helpers that eventually call\ninto perf_event_output(), then we could end up in a situation\nwhere the tracing attached program runs in user context while\na cls_bpf program is triggered on that same CPU out of softirq\ncontext.\n\nSince both rely on the same per-cpu perf_sample_data, we could\npotentially corrupt it. This can only ever happen in a combination\nof the two types; all tracing programs use a bpf_prog_active\ncounter to bail out in case a program is already running on\nthat CPU out of a different context. XDP and cls_bpf programs\nby themselves don\u0027t have this issue as they run in the same\ncontext only. Therefore, split both perf_sample_data so they\ncannot be accessed from each other.\n\nFixes: 20b9d7ac4852 (\"bpf: avoid excessive stack usage for perf_sample_data\")\nReported-by: Alexei Starovoitov \u003cast@fb.com\u003e\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nTested-by: Song Liu \u003csongliubraving@fb.com\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: Alexei Starovoitov \u003cast@kernel.org\u003e\n"
    },
    {
      "commit": "9802d86585db91655c7d1929a4f6bbe0952ea88e",
      "tree": "53b334864518dd27b243eafc9ab510ac56ee3b74",
      "parents": [
        "8556e50994c8a8f5282fea008ae084d6d080648a"
      ],
      "author": {
        "name": "Josef Bacik",
        "email": "jbacik@fb.com",
        "time": "Mon Dec 11 11:36:48 2017 -0500"
      },
      "committer": {
        "name": "Alexei Starovoitov",
        "email": "ast@kernel.org",
        "time": "Tue Dec 12 09:02:34 2017 -0800"
      },
      "message": "bpf: add a bpf_override_function helper\n\nError injection is sloppy and very ad-hoc.  BPF could fill this niche\nperfectly with it\u0027s kprobe functionality.  We could make sure errors are\nonly triggered in specific call chains that we care about with very\nspecific situations.  Accomplish this with the bpf_override_funciton\nhelper.  This will modify the probe\u0027d callers return value to the\nspecified value and set the PC to an override function that simply\nreturns, bypassing the originally probed function.  This gives us a nice\nclean way to implement systematic error injection for all of our code\npaths.\n\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nAcked-by: Ingo Molnar \u003cmingo@kernel.org\u003e\nSigned-off-by: Josef Bacik \u003cjbacik@fb.com\u003e\nSigned-off-by: Alexei Starovoitov \u003cast@kernel.org\u003e\n"
    },
    {
      "commit": "f371b304f12e31fe30207c41ca7754564e0ea4dc",
      "tree": "e4cfde5755f7538bdab443e0c0845455666ed08e",
      "parents": [
        "63060c39161d3d61c771dee20a3cbdffaf83f1df"
      ],
      "author": {
        "name": "Yonghong Song",
        "email": "yhs@fb.com",
        "time": "Mon Dec 11 11:39:02 2017 -0800"
      },
      "committer": {
        "name": "Alexei Starovoitov",
        "email": "ast@kernel.org",
        "time": "Tue Dec 12 08:46:40 2017 -0800"
      },
      "message": "bpf/tracing: allow user space to query prog array on the same tp\n\nCommit e87c6bc3852b (\"bpf: permit multiple bpf attachments\nfor a single perf event\") added support to attach multiple\nbpf programs to a single perf event.\nAlthough this provides flexibility, users may want to know\nwhat other bpf programs attached to the same tp interface.\nBesides getting visibility for the underlying bpf system,\nsuch information may also help consolidate multiple bpf programs,\nunderstand potential performance issues due to a large array,\nand debug (e.g., one bpf program which overwrites return code\nmay impact subsequent program results).\n\nCommit 2541517c32be (\"tracing, perf: Implement BPF programs\nattached to kprobes\") utilized the existing perf ioctl\ninterface and added the command PERF_EVENT_IOC_SET_BPF\nto attach a bpf program to a tracepoint. This patch adds a new\nioctl command, given a perf event fd, to query the bpf program\narray attached to the same perf tracepoint event.\n\nThe new uapi ioctl command:\n  PERF_EVENT_IOC_QUERY_BPF\n\nThe new uapi/linux/perf_event.h structure:\n  struct perf_event_query_bpf {\n       __u32\tids_len;\n       __u32\tprog_cnt;\n       __u32\tids[0];\n  };\n\nUser space provides buffer \"ids\" for kernel to copy to.\nWhen returning from the kernel, the number of available\nprograms in the array is set in \"prog_cnt\".\n\nThe usage:\n  struct perf_event_query_bpf *query \u003d\n    malloc(sizeof(*query) + sizeof(u32) * ids_len);\n  query.ids_len \u003d ids_len;\n  err \u003d ioctl(pmu_efd, PERF_EVENT_IOC_QUERY_BPF, query);\n  if (err \u003d\u003d 0) {\n    /* query.prog_cnt is the number of available progs,\n     * number of progs in ids: (ids_len \u003d\u003d 0) ? 0 : query.prog_cnt\n     */\n  } else if (errno \u003d\u003d ENOSPC) {\n    /* query.ids_len number of progs copied,\n     * query.prog_cnt is the number of available progs\n     */\n  } else {\n      /* other errors */\n  }\n\nSigned-off-by: Yonghong Song \u003cyhs@fb.com\u003e\nAcked-by: Peter Zijlstra (Intel) \u003cpeterz@infradead.org\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: Alexei Starovoitov \u003cast@kernel.org\u003e\n"
    },
    {
      "commit": "c8c088ba0edf65044c254b96fc438c91914aaab0",
      "tree": "26c2ce03951344a5241e1084cbb68d36275cc61a",
      "parents": [
        "2b279419567105d63f1e524bb1ac34ae8f918e5d"
      ],
      "author": {
        "name": "Yonghong Song",
        "email": "yhs@fb.com",
        "time": "Thu Nov 30 13:47:54 2017 -0800"
      },
      "committer": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Fri Dec 01 02:56:10 2017 +0100"
      },
      "message": "bpf: set maximum number of attached progs to 64 for a single perf tp\n\ncgropu+bpf prog array has a maximum number of 64 programs.\nLet us apply the same limit here.\n\nFixes: e87c6bc3852b (\"bpf: permit multiple bpf attachments for a single perf event\")\nSigned-off-by: Yonghong Song \u003cyhs@fb.com\u003e\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\n"
    },
    {
      "commit": "a60dd35d2e39209fa7645945e1192bf9769872c6",
      "tree": "1606e891bab8ee8bf102c3f0a3b4319d81a0ef2d",
      "parents": [
        "5c4e1201740ceae9bd6f622851a9bf7c66debe3a"
      ],
      "author": {
        "name": "Gianluca Borello",
        "email": "g.borello@gmail.com",
        "time": "Wed Nov 22 18:32:56 2017 +0000"
      },
      "committer": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Wed Nov 22 21:40:54 2017 +0100"
      },
      "message": "bpf: change bpf_perf_event_output arg5 type to ARG_CONST_SIZE_OR_ZERO\n\nCommit 9fd29c08e520 (\"bpf: improve verifier ARG_CONST_SIZE_OR_ZERO\nsemantics\") relaxed the treatment of ARG_CONST_SIZE_OR_ZERO due to the way\nthe compiler generates optimized BPF code when checking boundaries of an\nargument from C code. A typical example of this optimized code can be\ngenerated using the bpf_perf_event_output helper when operating on variable\nmemory:\n\n/* len is a generic scalar */\nif (len \u003e 0 \u0026\u0026 len \u003c\u003d 0x7fff)\n        bpf_perf_event_output(ctx, \u0026perf_map, 0, buf, len);\n\n110: (79) r5 \u003d *(u64 *)(r10 -40)\n111: (bf) r1 \u003d r5\n112: (07) r1 +\u003d -1\n113: (25) if r1 \u003e 0x7ffe goto pc+6\n114: (bf) r1 \u003d r6\n115: (18) r2 \u003d 0xffff94e5f166c200\n117: (b7) r3 \u003d 0\n118: (bf) r4 \u003d r7\n119: (85) call bpf_perf_event_output#25\nR5 min value is negative, either use unsigned or \u0027var \u0026\u003d const\u0027\n\nWith this code, the verifier loses track of the variable.\n\nReplacing arg5 with ARG_CONST_SIZE_OR_ZERO is thus desirable since it\navoids this quite common case which leads to usability issues, and the\ncompiler generates code that the verifier can more easily test:\n\nif (len \u003c\u003d 0x7fff)\n        bpf_perf_event_output(ctx, \u0026perf_map, 0, buf, len);\n\nor\n\nbpf_perf_event_output(ctx, \u0026perf_map, 0, buf, len \u0026 0x7fff);\n\nNo changes to the bpf_perf_event_output helper are necessary since it can\nhandle a case where size is 0, and an empty frame is pushed.\n\nReported-by: Arnaldo Carvalho de Melo \u003cacme@redhat.com\u003e\nSigned-off-by: Gianluca Borello \u003cg.borello@gmail.com\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nAcked-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\n"
    },
    {
      "commit": "5c4e1201740ceae9bd6f622851a9bf7c66debe3a",
      "tree": "242f6c483a4b6260b5f69f6b8efa63aaa77857e8",
      "parents": [
        "eb33f2cca49ec49a1b893b5af546e7c042ca6365"
      ],
      "author": {
        "name": "Gianluca Borello",
        "email": "g.borello@gmail.com",
        "time": "Wed Nov 22 18:32:55 2017 +0000"
      },
      "committer": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Wed Nov 22 21:40:54 2017 +0100"
      },
      "message": "bpf: change bpf_probe_read_str arg2 type to ARG_CONST_SIZE_OR_ZERO\n\nCommit 9fd29c08e520 (\"bpf: improve verifier ARG_CONST_SIZE_OR_ZERO\nsemantics\") relaxed the treatment of ARG_CONST_SIZE_OR_ZERO due to the way\nthe compiler generates optimized BPF code when checking boundaries of an\nargument from C code. A typical example of this optimized code can be\ngenerated using the bpf_probe_read_str helper when operating on variable\nmemory:\n\n/* len is a generic scalar */\nif (len \u003e 0 \u0026\u0026 len \u003c\u003d 0x7fff)\n        bpf_probe_read_str(p, len, s);\n\n251: (79) r1 \u003d *(u64 *)(r10 -88)\n252: (07) r1 +\u003d -1\n253: (25) if r1 \u003e 0x7ffe goto pc-42\n254: (bf) r1 \u003d r7\n255: (79) r2 \u003d *(u64 *)(r10 -88)\n256: (bf) r8 \u003d r4\n257: (85) call bpf_probe_read_str#45\nR2 min value is negative, either use unsigned or \u0027var \u0026\u003d const\u0027\n\nWith this code, the verifier loses track of the variable.\n\nReplacing arg2 with ARG_CONST_SIZE_OR_ZERO is thus desirable since it\navoids this quite common case which leads to usability issues, and the\ncompiler generates code that the verifier can more easily test:\n\nif (len \u003c\u003d 0x7fff)\n        bpf_probe_read_str(p, len, s);\n\nor\n\nbpf_probe_read_str(p, len \u0026 0x7fff, s);\n\nNo changes to the bpf_probe_read_str helper are necessary since\nstrncpy_from_unsafe itself immediately returns if the size passed is 0.\n\nSigned-off-by: Gianluca Borello \u003cg.borello@gmail.com\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nAcked-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\n"
    },
    {
      "commit": "eb33f2cca49ec49a1b893b5af546e7c042ca6365",
      "tree": "0a3aad9face780bd56482c236c5f968bf4ea6dcd",
      "parents": [
        "db1ac4964fa172803a0fea83033cd35d380a8a77"
      ],
      "author": {
        "name": "Gianluca Borello",
        "email": "g.borello@gmail.com",
        "time": "Wed Nov 22 18:32:54 2017 +0000"
      },
      "committer": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Wed Nov 22 21:40:54 2017 +0100"
      },
      "message": "bpf: remove explicit handling of 0 for arg2 in bpf_probe_read\n\nCommit 9c019e2bc4b2 (\"bpf: change helper bpf_probe_read arg2 type to\nARG_CONST_SIZE_OR_ZERO\") changed arg2 type to ARG_CONST_SIZE_OR_ZERO to\nsimplify writing bpf programs by taking advantage of the new semantics\nintroduced for ARG_CONST_SIZE_OR_ZERO which allows \u003c!NULL, 0\u003e arguments.\n\nIn order to prevent the helper from actually passing a NULL pointer to\nprobe_kernel_read, which can happen when \u003cNULL, 0\u003e is passed to the helper,\nthe commit also introduced an explicit check against size \u003d\u003d 0.\n\nAfter the recent introduction of the ARG_PTR_TO_MEM_OR_NULL type,\nbpf_probe_read can not receive a pair of \u003cNULL, 0\u003e arguments anymore, thus\nthe check is not needed anymore and can be removed, since probe_kernel_read\ncan correctly handle a \u003c!NULL, 0\u003e call. This also fixes the semantics of\nthe helper before it gets officially released and bpf programs start\nrelying on this check.\n\nFixes: 9c019e2bc4b2 (\"bpf: change helper bpf_probe_read arg2 type to ARG_CONST_SIZE_OR_ZERO\")\nSigned-off-by: Gianluca Borello \u003cg.borello@gmail.com\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nAcked-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nAcked-by: Yonghong Song \u003cyhs@fb.com\u003e\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\n"
    },
    {
      "commit": "9c019e2bc4b2bd8223c8c0d4b6962478b479834d",
      "tree": "638dfe307de950e1e4101cedc34dfd82ee3dd4e1",
      "parents": [
        "9fd29c08e52023252f0480ab8f6906a1ecc9a8d5"
      ],
      "author": {
        "name": "Yonghong Song",
        "email": "yhs@fb.com",
        "time": "Sun Nov 12 14:49:10 2017 -0800"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Tue Nov 14 16:20:03 2017 +0900"
      },
      "message": "bpf: change helper bpf_probe_read arg2 type to ARG_CONST_SIZE_OR_ZERO\n\nThe helper bpf_probe_read arg2 type is changed\nfrom ARG_CONST_SIZE to ARG_CONST_SIZE_OR_ZERO to permit\nsize-0 buffer. Together with newer ARG_CONST_SIZE_OR_ZERO\nsemantics which allows non-NULL buffer with size 0,\nthis allows simpler bpf programs with verifier acceptance.\nThe previous commit which changes ARG_CONST_SIZE_OR_ZERO semantics\nhas details on examples.\n\nSigned-off-by: Yonghong Song \u003cyhs@fb.com\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nAcked-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "f3edacbd697f94a743fff1a3d26910ab99948ba7",
      "tree": "c185057f2e3ae783ad3ccd5b5b96af200d2eb618",
      "parents": [
        "bee955cd3ab4f1a1eb8fc16e7ed69364143df8d7"
      ],
      "author": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Sat Nov 11 18:24:55 2017 +0900"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Sat Nov 11 18:24:55 2017 +0900"
      },
      "message": "bpf: Revert bpf_overrid_function() helper changes.\n\nNACK\u0027d by x86 maintainer.\n\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "dd0bb688eaa241b5655d396d45366cba9225aed9",
      "tree": "80e320112959e90d474fd20e644b8377217dad0b",
      "parents": [
        "54985120a1c461b74f9510e5d730971f2a2383b1"
      ],
      "author": {
        "name": "Josef Bacik",
        "email": "jbacik@fb.com",
        "time": "Tue Nov 07 15:28:42 2017 -0500"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Sat Nov 11 12:18:05 2017 +0900"
      },
      "message": "bpf: add a bpf_override_function helper\n\nError injection is sloppy and very ad-hoc.  BPF could fill this niche\nperfectly with it\u0027s kprobe functionality.  We could make sure errors are\nonly triggered in specific call chains that we care about with very\nspecific situations.  Accomplish this with the bpf_override_funciton\nhelper.  This will modify the probe\u0027d callers return value to the\nspecified value and set the PC to an override function that simply\nreturns, bypassing the originally probed function.  This gives us a nice\nclean way to implement systematic error injection for all of our code\npaths.\n\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: Josef Bacik \u003cjbacik@fb.com\u003e\nAcked-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "07c41a295c5f25928a7cb689fdec816bd0089fe8",
      "tree": "069bf405b725d169ce806019e0d4a612d31b456c",
      "parents": [
        "3051fbec206eb6967b7fdecedb63ebb1ed67a1a7"
      ],
      "author": {
        "name": "Yonghong Song",
        "email": "yhs@fb.com",
        "time": "Mon Oct 30 13:50:22 2017 -0700"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Wed Nov 01 12:35:48 2017 +0900"
      },
      "message": "bpf: avoid rcu_dereference inside bpf_event_mutex lock region\n\nDuring perf event attaching/detaching bpf programs,\nthe tp_event-\u003eprog_array change is protected by the\nbpf_event_mutex lock in both attaching and deteching\nfunctions. Although tp_event-\u003eprog_array is a rcu\npointer, rcu_derefrence is not needed to access it\nsince mutex lock will guarantee ordering.\n\nVerified through \"make C\u003d2\" that sparse\nlocking check still happy with the new change.\n\nAlso change the label name in perf_event_{attach,detach}_bpf_prog\nfrom \"out\" to \"unlock\" to reflect the code action after the label.\n\nSigned-off-by: Yonghong Song \u003cyhs@fb.com\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nAcked-by: Martin KaFai Lau \u003ckafai@fb.com\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "035226b964c820f65e201cdf123705a8f1d7c670",
      "tree": "4559b7bd07f54944a2f046b6124a24c9f24e9f14",
      "parents": [
        "392209fa833287a1c5532ffbb098bba584a69dbc"
      ],
      "author": {
        "name": "Gianluca Borello",
        "email": "g.borello@gmail.com",
        "time": "Thu Oct 26 01:47:42 2017 +0000"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Fri Oct 27 22:14:22 2017 +0900"
      },
      "message": "bpf: remove tail_call and get_stackid helper declarations from bpf.h\n\ncommit afdb09c720b6 (\"security: bpf: Add LSM hooks for bpf object related\nsyscall\") included linux/bpf.h in linux/security.h. As a result, bpf\nprograms including bpf_helpers.h and some other header that ends up\npulling in also security.h, such as several examples under samples/bpf,\nfail to compile because bpf_tail_call and bpf_get_stackid are now\n\"redefined as different kind of symbol\".\n\n\u003eFrom bpf.h:\n\nu64 bpf_tail_call(u64 ctx, u64 r2, u64 index, u64 r4, u64 r5);\nu64 bpf_get_stackid(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);\n\nWhereas in bpf_helpers.h they are:\n\nstatic void (*bpf_tail_call)(void *ctx, void *map, int index);\nstatic int (*bpf_get_stackid)(void *ctx, void *map, int flags);\n\nFix this by removing the unused declaration of bpf_tail_call and moving\nthe declaration of bpf_get_stackid in bpf_trace.c, which is the only\nplace where it\u0027s needed.\n\nSigned-off-by: Gianluca Borello \u003cg.borello@gmail.com\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "e87c6bc3852b981e71c757be20771546ce9f76f3",
      "tree": "bad3be630137d8e873f4ad5a1ea77b4aa1853184",
      "parents": [
        "0b4c6841fee03e096b735074a0c4aab3a8e92986"
      ],
      "author": {
        "name": "Yonghong Song",
        "email": "yhs@fb.com",
        "time": "Mon Oct 23 23:53:08 2017 -0700"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Wed Oct 25 10:47:47 2017 +0900"
      },
      "message": "bpf: permit multiple bpf attachments for a single perf event\n\nThis patch enables multiple bpf attachments for a\nkprobe/uprobe/tracepoint single trace event.\nEach trace_event keeps a list of attached perf events.\nWhen an event happens, all attached bpf programs will\nbe executed based on the order of attachment.\n\nA global bpf_event_mutex lock is introduced to protect\nprog_array attaching and detaching. An alternative will\nbe introduce a mutex lock in every trace_event_call\nstructure, but it takes a lot of extra memory.\nSo a global bpf_event_mutex lock is a good compromise.\n\nThe bpf prog detachment involves allocation of memory.\nIf the allocation fails, a dummy do-nothing program\nwill replace to-be-detached program in-place.\n\nSigned-off-by: Yonghong Song \u003cyhs@fb.com\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nAcked-by: Martin KaFai Lau \u003ckafai@fb.com\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "7de16e3a35578f4f5accc6f5f23970310483d0a2",
      "tree": "977607f6b91dfadf039db4643689a8c9a962107a",
      "parents": [
        "386fd5da401dc6c4b0ab6a54d333609876b699fe"
      ],
      "author": {
        "name": "Jakub Kicinski",
        "email": "jakub.kicinski@netronome.com",
        "time": "Mon Oct 16 16:40:53 2017 -0700"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Wed Oct 18 14:17:10 2017 +0100"
      },
      "message": "bpf: split verifier and program ops\n\nstruct bpf_verifier_ops contains both verifier ops and operations\nused later during program\u0027s lifetime (test_run).  Split the runtime\nops into a different structure.\n\nBPF_PROG_TYPE() will now append ## _prog_ops or ## _verifier_ops\nto the names.\n\nSigned-off-by: Jakub Kicinski \u003cjakub.kicinski@netronome.com\u003e\nAcked-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "4bebdc7a85aa400c0222b5329861e4ad9252f1e5",
      "tree": "59151679b652bacaf545664158a85b0a4c7c75fe",
      "parents": [
        "020a32d9581ac824d038b0b4e24e977e3cc8589f"
      ],
      "author": {
        "name": "Yonghong Song",
        "email": "yhs@fb.com",
        "time": "Thu Oct 05 09:19:22 2017 -0700"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Sat Oct 07 23:05:57 2017 +0100"
      },
      "message": "bpf: add helper bpf_perf_prog_read_value\n\nThis patch adds the helper bpf_perf_prog_read_value for perf event based bpf\nprograms, to read the event counter and enabled/running time.\nThe enabled/running time is accumulated since the perf event open.\n\nThe typical use case for a perf event based bpf program is to attach itself\nto a single event. In such cases, if it is desirable to get the scaling factor\nbetween two bpf invocations, users can save the time values in a map,\nand use the value from the map and the current value to calculate\nthe scaling factor.\n\nSigned-off-by: Yonghong Song \u003cyhs@fb.com\u003e\nAcked-by: Alexei Starovoitov \u003cast@fb.com\u003e\nAcked-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "908432ca84fc229e906ba164219e9ad0fe56f755",
      "tree": "042a24e92305abbd98d761b695356d5d82760a61",
      "parents": [
        "97562633bcbac4a07d605ae628d7655fa71caaf5"
      ],
      "author": {
        "name": "Yonghong Song",
        "email": "yhs@fb.com",
        "time": "Thu Oct 05 09:19:20 2017 -0700"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Sat Oct 07 23:05:57 2017 +0100"
      },
      "message": "bpf: add helper bpf_perf_event_read_value for perf event array map\n\nHardware pmu counters are limited resources. When there are more\npmu based perf events opened than available counters, the kernel will\nmultiplex these events so each event gets a certain percentage\n(but not 100%) of the pmu time. When multiplexing happens,\nthe number of samples or the counter value will not reflect what\nit would be without multiplexing. This makes comparison between\ndifferent runs difficult.\n\nTypically, the number of samples or counter value should be\nnormalized before comparing to other experiments. The typical\nnormalization is done like:\n  normalized_num_samples \u003d num_samples * time_enabled / time_running\n  normalized_counter_value \u003d counter_value * time_enabled / time_running\nwhere time_enabled is the time enabled for the event and time_running is\nthe time running for the event since the last normalization.\n\nThis patch adds the helper bpf_perf_event_read_value for the kprobe based perf\nevent array map, to read the perf counter and enabled/running time.\nThe enabled/running time is accumulated since the perf event open.\nTo compute the scaling factor between two bpf invocations, users\ncan use cpu_id as the key (which is typical for the perf array usage model)\nto remember the previous value and do the calculation inside the\nbpf program.\n\nSigned-off-by: Yonghong Song \u003cyhs@fb.com\u003e\nAcked-by: Alexei Starovoitov \u003cast@fb.com\u003e\nAcked-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "97562633bcbac4a07d605ae628d7655fa71caaf5",
      "tree": "e1bda190d84a5f9d430b90c3b4934e71be441beb",
      "parents": [
        "bdc476413dcdb5c38a7dec90fb2bca327021273a"
      ],
      "author": {
        "name": "Yonghong Song",
        "email": "yhs@fb.com",
        "time": "Thu Oct 05 09:19:19 2017 -0700"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Sat Oct 07 23:05:57 2017 +0100"
      },
      "message": "bpf: perf event change needed for subsequent bpf helpers\n\nThis patch does not impact existing functionalities.\nIt contains the changes in perf event area needed for\nsubsequent bpf_perf_event_read_value and\nbpf_perf_prog_read_value helpers.\n\nSigned-off-by: Yonghong Song \u003cyhs@fb.com\u003e\nAcked-by: Peter Zijlstra (Intel) \u003cpeterz@infradead.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "88a5c690b66110ad255380d8f629c629cf6ca559",
      "tree": "f2c4151bf0deb258d60c0a5dc0e1d13d77fcf8e6",
      "parents": [
        "0e405232871d67bf1b238d56b6b3d500e69ebbf3"
      ],
      "author": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Wed Aug 16 01:45:33 2017 +0200"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Tue Aug 15 17:32:15 2017 -0700"
      },
      "message": "bpf: fix bpf_trace_printk on 32 bit archs\n\nJames reported that on MIPS32 bpf_trace_printk() is currently\nbroken while MIPS64 works fine:\n\n  bpf_trace_printk() uses conditional operators to attempt to\n  pass different types to __trace_printk() depending on the\n  format operators. This doesn\u0027t work as intended on 32-bit\n  architectures where u32 and long are passed differently to\n  u64, since the result of C conditional operators follows the\n  \"usual arithmetic conversions\" rules, such that the values\n  passed to __trace_printk() will always be u64 [causing issues\n  later in the va_list handling for vscnprintf()].\n\n  For example the samples/bpf/tracex5 test printed lines like\n  below on MIPS32, where the fd and buf have come from the u64\n  fd argument, and the size from the buf argument:\n\n    [...] 1180.941542: 0x00000001: write(fd\u003d1, buf\u003d  (null), size\u003d6258688)\n\n  Instead of this:\n\n    [...] 1625.616026: 0x00000001: write(fd\u003d1, buf\u003d009e4000, size\u003d512)\n\nOne way to get it working is to expand various combinations\nof argument types into 8 different combinations for 32 bit\nand 64 bit kernels. Fix tested by James on MIPS32 and MIPS64\nas well that it resolves the issue.\n\nFixes: 9c959c863f82 (\"tracing: Allow BPF programs to call bpf_trace_printk()\")\nReported-by: James Hogan \u003cjames.hogan@imgtec.com\u003e\nTested-by: James Hogan \u003cjames.hogan@imgtec.com\u003e\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "7bda4b40c5624c3f1c69227f8ebfd46a4b83f2ef",
      "tree": "6d64dd07567eaa2992d1982c70bd1322354461e3",
      "parents": [
        "9780c0ab1a4e64ef6998c4d83f9df5be806a02dc"
      ],
      "author": {
        "name": "John Fastabend",
        "email": "john.fastabend@gmail.com",
        "time": "Sun Jul 02 02:13:29 2017 +0200"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Mon Jul 03 02:22:52 2017 -0700"
      },
      "message": "bpf: extend bpf_trace_printk to support %i\n\nCurrently, bpf_trace_printk does not support the common formatting\nsymbol \u0027%i\u0027; however, vsprintf, which is what eventually gets\ncalled by the bpf helper, does. If users are used to \u0027%i\u0027 and currently\nmake use of it, then bpf_trace_printk will just return an\nerror without dumping anything to the trace pipe, so just add\nsupport for \u0027%i\u0027 to the helper.\n\nSigned-off-by: John Fastabend \u003cjohn.fastabend@gmail.com\u003e\nAcked-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "f96da09473b52c09125cc9bf7d7d4576ae8229e0",
      "tree": "5a246cb2a6522950dff8e3a3d4c223e225c99a01",
      "parents": [
        "2be7e212d5419a400d051c84ca9fdd083e5aacac"
      ],
      "author": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Sun Jul 02 02:13:27 2017 +0200"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Mon Jul 03 02:22:52 2017 -0700"
      },
      "message": "bpf: simplify narrower ctx access\n\nThis work tries to make the semantics and code around the\nnarrower ctx access a bit easier to follow. Right now\neverything is done inside the .is_valid_access(). Offset\nmatching is done differently for read/write types, meaning\nwrites don\u0027t support narrower access and thus matching only\non offsetof(struct foo, bar) is enough, whereas for the read\ncase that supports narrower access we must check the range\nfrom offsetof(struct foo, bar) up to offsetof(struct foo, bar) +\nsizeof(\u003cbar\u003e) - 1 for each of the cases. For read cases of\nindividual members that don\u0027t support narrower access (like\npacket pointers or the skb-\u003ecb[] case which has its own narrow\naccess logic), we check as usual only offsetof(struct foo,\nbar) like in the write case. Then, for the case where narrower\naccess is allowed, we also need to set the aux info for the\naccess. Meaning, ctx_field_size and converted_op_size have\nto be set. The former is the original field size, e.g. sizeof(\u003cbar\u003e)\nas in the above example from the user facing ctx, and the latter\nis the target size after the actual rewrite happened, thus\nfor the kernel facing ctx. Also here we need the range match,\nand we need to keep track of changes across convert_ctx_access() and\nconverted_op_size from is_valid_access() as both are not at\nthe same location.\n\nWe can simplify the code a bit: check_ctx_access() becomes\nsimpler in that we only store ctx_field_size as metadata\nand later in convert_ctx_accesses() we fetch the target_size\nright from the location where we do the conversion. Should the verifier\nbe misconfigured, we reject BPF_WRITE cases or cases where target_size\nis not provided. For the subsystems, we always work on\nranges in is_valid_access() and add small helpers for ranges\nand narrow access; convert_ctx_accesses() sets target_size\nfor the relevant instruction.\n\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nAcked-by: John Fastabend \u003cjohn.fastabend@gmail.com\u003e\nCc: Yonghong Song \u003cyhs@fb.com\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "239946314e57711d7da546b67964d0b387a3ee42",
      "tree": "958d35fbbbc439b561832c75de22f5fdfa825f7c",
      "parents": [
        "72de46556f8a291b2c72ea1fa22275ffef85e4f9"
      ],
      "author": {
        "name": "Yonghong Song",
        "email": "yhs@fb.com",
        "time": "Thu Jun 22 15:07:39 2017 -0700"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Fri Jun 23 14:04:11 2017 -0400"
      },
      "message": "bpf: possibly avoid extra masking for narrower load in verifier\n\nCommit 31fd85816dbe (\"bpf: permits narrower load from bpf program\ncontext fields\") permits narrower loads for certain ctx fields.\nThat commit, however, already generates masking even if\nthe prog-specific ctx conversion produces the result with\nnarrower size.\n\nFor example, for __sk_buff-\u003eprotocol, the ctx conversion\nloads the data into a register with a 2-byte load.\nA narrower 2-byte load should not generate masking.\nFor __sk_buff-\u003evlan_present, the conversion function\nsets the result as either 0 or 1, essentially a byte.\nThe narrower 2-byte or 1-byte load should not generate masking.\n\nTo avoid unnecessary masking, prog-specific *_is_valid_access\nnow passes converted_op_size back to the verifier, which indicates\nthe valid data width after perceived future conversion.\nBased on this information, the verifier is able to avoid\nunnecessary masking.\n\nSince we want more information back from prog-specific\n*_is_valid_access checking, all of them are packed into\none data structure for more clarity.\n\nAcked-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nSigned-off-by: Yonghong Song \u003cyhs@fb.com\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "31fd85816dbe3a714bcc3f67c17c3dd87011f79e",
      "tree": "d8c694e4997605254ea96a76c5d633f60ee091cf",
      "parents": [
        "a88e2676a6cd3352c2f590f872233d83d8db289c"
      ],
      "author": {
        "name": "Yonghong Song",
        "email": "yhs@fb.com",
        "time": "Tue Jun 13 15:52:13 2017 -0700"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Wed Jun 14 14:56:25 2017 -0400"
      },
      "message": "bpf: permits narrower load from bpf program context fields\n\nCurrently, the verifier will reject a program if it contains a\nnarrower load from the bpf context structure. For example,\n        __u8 h \u003d __sk_buff-\u003ehash, or\n        __u16 p \u003d __sk_buff-\u003eprotocol\n        __u32 sample_period \u003d bpf_perf_event_data-\u003esample_period\nwhich are narrower loads of 4-byte or 8-byte fields.\n\nThis patch solves the issue by:\n  . Introduce a new parameter ctx_field_size to carry the\n    field size of a narrower load from the prog type\n    specific *__is_valid_access validator back to the verifier.\n  . The non-zero ctx_field_size for a memory access indicates\n    (1). underlying prog type specific convert_ctx_accesses\n         supporting non-whole-field access\n    (2). the current insn is a narrower or whole field access.\n  . In the verifier, for such loads where the load memory size is\n    less than ctx_field_size, the verifier transforms it\n    to a full field load followed by proper masking.\n  . Currently, __sk_buff and bpf_perf_event_data-\u003esample_period\n    support narrower loads.\n  . Narrower stores are still not allowed as typical ctx stores\n    are just normal stores.\n\nBecause of this change, some tests in the verifier will fail and\nthese tests are removed. As a bonus, rename some out-of-bound\n__sk_buff-\u003ecb accesses to the proper field name and remove two\nredundant \"skb cb oob\" tests.\n\nAcked-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nSigned-off-by: Yonghong Song \u003cyhs@fb.com\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "20b9d7ac48526ce9a14106241e76e8382d126a60",
      "tree": "8e5e133552c45aaf6eddf0e61c88939f2df57695",
      "parents": [
        "41e8e40458a417bbbabfbec5362b8747601e6a3a"
      ],
      "author": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Sun Jun 11 00:50:40 2017 +0200"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Sat Jun 10 19:05:45 2017 -0400"
      },
      "message": "bpf: avoid excessive stack usage for perf_sample_data\n\nperf_sample_data consumes 386 bytes on stack, reduce excessive stack\nusage and move it to per cpu buffer. It\u0027s allowed due to preemption\nbeing disabled for tracing, xdp and tc programs, thus at all times\nonly one program can run on a specific CPU and programs cannot run\nfrom interrupt. We similarly also handle bpf_pt_regs.\n\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "f91840a32deef5cb1bf73338bc5010f843b01426",
      "tree": "e7a3eec8f6794fda623941afb426db5c1f8472b0",
      "parents": [
        "5071034e4af709d6783b7d105dc296a5cc84739b"
      ],
      "author": {
        "name": "Alexei Starovoitov",
        "email": "ast@fb.com",
        "time": "Fri Jun 02 21:03:52 2017 -0700"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Sun Jun 04 21:58:01 2017 -0400"
      },
      "message": "perf, bpf: Add BPF support to all perf_event types\n\nAllow BPF_PROG_TYPE_PERF_EVENT program types to attach to all\nperf_event types, including HW_CACHE, RAW, and dynamic pmu events.\nOnly tracepoint/kprobe events are treated differently which require\nBPF_PROG_TYPE_TRACEPOINT/BPF_PROG_TYPE_KPROBE program types accordingly.\n\nAlso add support for reading all event counters using\nbpf_perf_event_read() helper.\n\nSigned-off-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "8d65b08debc7e62b2c6032d7fe7389d895b92cbc",
      "tree": "0c3141b60c3a03cc32742b5750c5e763b9dae489",
      "parents": [
        "5a0387a8a8efb90ae7fea1e2e5c62de3efa74691",
        "5d15af6778b8e4ed1fd41b040283af278e7a9a72"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue May 02 16:40:27 2017 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue May 02 16:40:27 2017 -0700"
      },
      "message": "Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next\n\nPull networking updates from David Miller:\n \"Here are some highlights from the 2065 networking commits that\n  happened this development cycle:\n\n   1) XDP support for IXGBE (John Fastabend) and thunderx (Sunil Kowuri)\n\n   2) Add a generic XDP driver, so that anyone can test XDP even if they\n      lack a networking device whose driver has explicit XDP support\n      (me).\n\n   3) Sparc64 now has an eBPF JIT too (me)\n\n   4) Add a BPF program testing framework via BPF_PROG_TEST_RUN (Alexei\n      Starovoitov)\n\n   5) Make netfilter network namespace teardown less expensive (Florian\n      Westphal)\n\n   6) Add symmetric hashing support to nft_hash (Laura Garcia Liebana)\n\n   7) Implement NAPI and GRO in netvsc driver (Stephen Hemminger)\n\n   8) Support TC flower offload statistics in mlxsw (Arkadi Sharshevsky)\n\n   9) Multiqueue support in stmmac driver (Joao Pinto)\n\n  10) Remove TCP timewait recycling, it never really could possibly work\n      well in the real world and timestamp randomization really zaps any\n      hint of usability this feature had (Soheil Hassas Yeganeh)\n\n  11) Support level3 vs level4 ECMP route hashing in ipv4 (Nikolay\n      Aleksandrov)\n\n  12) Add socket busy poll support to epoll (Sridhar Samudrala)\n\n  13) Netlink extended ACK support (Johannes Berg, Pablo Neira Ayuso,\n      and several others)\n\n  14) IPSEC hw offload infrastructure (Steffen Klassert)\"\n\n* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (2065 commits)\n  tipc: refactor function tipc_sk_recv_stream()\n  tipc: refactor function tipc_sk_recvmsg()\n  net: thunderx: Optimize page recycling for XDP\n  net: thunderx: Support for XDP header adjustment\n  net: thunderx: Add support for XDP_TX\n  net: thunderx: Add support for XDP_DROP\n  net: thunderx: Add basic XDP support\n  net: thunderx: Cleanup receive buffer allocation\n  net: thunderx: Optimize CQE_TX handling\n  net: thunderx: Optimize RBDR descriptor handling\n  net: thunderx: Support for page recycling\n  ipx: call ipxitf_put() in ioctl error path\n  net: sched: add helpers to handle extended actions\n  qed*: Fix issues in the ptp filter config implementation.\n  qede: Fix concurrency issue in PTP Tx path processing.\n  stmmac: Add support for SIMATIC IOT2000 platform\n  net: hns: fix ethtool_get_strings overflow in hns driver\n  tcp: fix wraparound issue in tcp_lp\n  bpf, arm64: fix jit branch offset related to ldimm64\n  bpf, arm64: implement jiting of BPF_XADD\n  ...\n"
    },
    {
      "commit": "be9370a7d8614d1fa54649c75de14458e79b91ec",
      "tree": "69ab002234e93207d87ad9864028557919c791df",
      "parents": [
        "98601e8bc62d41659eb6478d2f66fb35361597ac"
      ],
      "author": {
        "name": "Johannes Berg",
        "email": "johannes.berg@intel.com",
        "time": "Tue Apr 11 15:34:57 2017 +0200"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Tue Apr 11 14:38:43 2017 -0400"
      },
      "message": "bpf: remove struct bpf_prog_type_list\n\nThere\u0027s no need to have struct bpf_prog_type_list since\nit just contains a list_head, the type, and the ops\npointer. Since the types are densely packed and not\nactually dynamically registered, it\u0027s much easier and\nsmaller to have an array of type-\u003eops pointers. Also\ninitialize this array statically to remove code needed\nto initialize it.\n\nTo avoid duplicating the list, move it to a new\nheader file and include it in the places needing it.\n\nSigned-off-by: Johannes Berg \u003cjohannes.berg@intel.com\u003e\nAcked-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "db68ce10c4f0a27c1ff9fa0e789e5c41f8c4ea63",
      "tree": "77eda1d247853a2d414e0047c620b3c72bb11a1a",
      "parents": [
        "aaa2e7ac80f679230faf28a8e12e8d68dbe977eb"
      ],
      "author": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Mon Mar 20 21:08:07 2017 -0400"
      },
      "committer": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Tue Mar 28 16:43:25 2017 -0400"
      },
      "message": "new helper: uaccess_kernel()\n\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n"
    },
    {
      "commit": "c78f8bdfa11fcceb9723c61212e4bd8f76c87f9e",
      "tree": "456eca6a892aea9e8cd9fec3b9ca6b380112d7b2",
      "parents": [
        "afcb50ba7f745eea32f91d7f63d6aa88f929f9c4"
      ],
      "author": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Thu Feb 16 22:24:48 2017 +0100"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Fri Feb 17 13:40:04 2017 -0500"
      },
      "message": "bpf: mark all registered map/prog types as __ro_after_init\n\nAll map types and prog types are registered to the BPF core through\nbpf_register_map_type() and bpf_register_prog_type() during init and\nremain unchanged thereafter. As by design we don\u0027t (and never will)\nhave any pluggable code that can register to that at any later point\nin time, lets mark all the existing bpf_{map,prog}_type_list objects\nin the tree as __ro_after_init, so they can be moved to read-only\nsection from then onwards.\n\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "a5e8c07059d0f0b31737408711d44794928ac218",
      "tree": "7b7f4908720025e2d9790fa94e1cfac423bd9881",
      "parents": [
        "0760462860f3e4b04ffd5addafb9c0cc571fbddf"
      ],
      "author": {
        "name": "Gianluca Borello",
        "email": "g.borello@gmail.com",
        "time": "Wed Jan 18 17:55:49 2017 +0000"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Fri Jan 20 12:08:43 2017 -0500"
      },
      "message": "bpf: add bpf_probe_read_str helper\n\nProvide a simple helper with the same semantics as strncpy_from_unsafe():\n\nint bpf_probe_read_str(void *dst, int size, const void *unsafe_addr)\n\nThis gives more flexibility to a bpf program. A typical use case is\nintercepting a file name during sys_open(). The current approach is:\n\nSEC(\"kprobe/sys_open\")\nvoid bpf_sys_open(struct pt_regs *ctx)\n{\n\tchar buf[PATHLEN]; // PATHLEN is defined to 256\n\tbpf_probe_read(buf, sizeof(buf), ctx-\u003edi);\n\n\t/* consume buf */\n}\n\nThis is suboptimal because the size of the string needs to be estimated\nat compile time, causing more memory to be copied than often necessary,\nand can become more problematic if further processing on buf is done,\nfor example by pushing it to userspace via bpf_perf_event_output(),\nsince the real length of the string is unknown and the entire buffer\nmust be copied (and defining an unrolled strnlen() inside the bpf\nprogram is a very inefficient and unfeasible approach).\n\nWith the new helper, the code can easily operate on the actual string\nlength rather than the buffer size:\n\nSEC(\"kprobe/sys_open\")\nvoid bpf_sys_open(struct pt_regs *ctx)\n{\n\tchar buf[PATHLEN]; // PATHLEN is defined to 256\n\tint res \u003d bpf_probe_read_str(buf, sizeof(buf), ctx-\u003edi);\n\n\t/* consume buf, for example push it to userspace via\n\t * bpf_perf_event_output(), but this time we can use\n\t * res (the string length) as event size, after checking\n\t * its boundaries.\n\t */\n}\n\nAnother useful use case is when parsing individual process arguments or\nindividual environment variables navigating current-\u003emm-\u003earg_start and\ncurrent-\u003emm-\u003eenv_start: using this helper and the return value, one can\nquickly iterate at the right offset of the memory area.\n\nThe code changes simply leverage the already existing\nstrncpy_from_unsafe() kernel function, which is safe to be called from a\nbpf program as it is used in bpf_trace_printk().\n\nSigned-off-by: Gianluca Borello \u003cg.borello@gmail.com\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nAcked-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "2d071c643f1cd15a24172de4b5b7ae2adb93abbb",
      "tree": "31fa22a277b92984067d77bb6d2f1edaae1d2adf",
      "parents": [
        "019ec0032e821a7262995af0c81b242dc7e55c9f"
      ],
      "author": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Sun Jan 15 01:34:25 2017 +0100"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Mon Jan 16 14:41:42 2017 -0500"
      },
      "message": "bpf, trace: make ctx access checks more robust\n\nMake sure that ctx cannot potentially be accessed oob by asserting\nexplicitly that ctx access size into pt_regs for BPF_PROG_TYPE_KPROBE\nprograms must be within limits. In case some 32bit archs have pt_regs\nnot being a multiple of 8, then BPF_DW access could cause such access.\n\nBPF_PROG_TYPE_KPROBE progs don\u0027t have a ctx conversion function since\nthere\u0027s no extra mapping needed. kprobe_prog_is_valid_access() didn\u0027t\nenforce sizeof(long) as the only allowed access size, since LLVM can\ngenerate non BPF_W/BPF_DW access to regs from time to time.\n\nFor BPF_PROG_TYPE_TRACEPOINT we don\u0027t have a ctx conversion either, so\nadd a BUILD_BUG_ON() check to make sure that BPF_DW access will not be\na similar issue in future (ctx works on event buffer as opposed to\npt_regs there).\n\nFixes: 2541517c32be (\"tracing, perf: Implement BPF programs attached to kprobes\")\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "6b8cc1d11ef75c5b9c530b3d0d148f3c2dd25f93",
      "tree": "36f8bae922c1f926d8b34a489fc7e34064dedd76",
      "parents": [
        "f811b436522d3b9c05302f1785aba61829938a54"
      ],
      "author": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Thu Jan 12 11:51:32 2017 +0100"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Thu Jan 12 10:00:31 2017 -0500"
      },
      "message": "bpf: pass original insn directly to convert_ctx_access\n\nCurrently, when calling convert_ctx_access() callback for the various\nprogram types, we pass in insn-\u003edst_reg, insn-\u003esrc_reg, insn-\u003eoff from\nthe original instruction. This information is needed to rewrite the\ninstruction that is based on the user ctx structure into a kernel\nrepresentation for the ctx. As we\u0027d like to allow access size beyond\njust BPF_W, we\u0027d need also insn-\u003ecode for that in order to decode the\noriginal access size. Given that, lets just pass insn directly to the\nconvert_ctx_access() callback and work on that to not clutter the\ncallback with even more arguments we need to pass when everything is\nalready contained in insn. So lets go through that once, no functional\nchange.\n\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "39f19ebbf57b403695f7b5f9cf322fe1ddb5d7fb",
      "tree": "ad8a37b775d317d2f8166e61a8c689c75a9af7ab",
      "parents": [
        "06c1c049721a995dee2829ad13b24aaf5d7c5cce"
      ],
      "author": {
        "name": "Alexei Starovoitov",
        "email": "ast@fb.com",
        "time": "Mon Jan 09 10:19:50 2017 -0800"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Mon Jan 09 16:56:27 2017 -0500"
      },
      "message": "bpf: rename ARG_PTR_TO_STACK\n\nsince ARG_PTR_TO_STACK is no longer just pointer to stack\nrename it to ARG_PTR_TO_MEM and adjust comment.\n\nSigned-off-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nAcked-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "2d0e30c30f84d08dc16f0f2af41f1b8a85f0755e",
      "tree": "a58da7082e4dcfea4b7782e72aec65920cfd5905",
      "parents": [
        "a10b91b8b81c29b87ff5a6d58c1402898337b956"
      ],
      "author": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Fri Oct 21 12:46:33 2016 +0200"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Sat Oct 22 17:05:52 2016 -0400"
      },
      "message": "bpf: add helper for retrieving current numa node id\n\nUse case is mainly for soreuseport to select sockets for the local\nnuma node, but since generic, lets also add this for other networking\nand tracing program types.\n\nSuggested-by: Eric Dumazet \u003cedumazet@google.com\u003e\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nAcked-by: Eric Dumazet \u003cedumazet@google.com\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "f3694e00123802d688180e7ae90b240669910e3c",
      "tree": "321a9b95e9df3e64adbc8340a5f63a778db69e70",
      "parents": [
        "374fb54eeaaa6b2cb82bca73a11273687bb2a96a"
      ],
      "author": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Fri Sep 09 02:45:31 2016 +0200"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Fri Sep 09 19:36:04 2016 -0700"
      },
      "message": "bpf: add BPF_CALL_x macros for declaring helpers\n\nThis work adds BPF_CALL_\u003cn\u003e() macros and converts all the eBPF helper functions\nto use them, in a similar fashion like we do with SYSCALL_DEFINE\u003cn\u003e() macros\nthat are used today. Motivation for this is to hide all the register handling\nand all necessary casts from the user, so that it is done automatically in the\nbackground when adding a BPF_CALL_\u003cn\u003e() call.\n\nThis makes current helpers easier to review, eases to write future helpers,\navoids getting the casting mess wrong, and allows for extending all helpers at\nonce (f.e. build time checks, etc). It also helps detecting more easily in\ncode reviews that unused registers are not instrumented in the code by accident,\nbreaking compatibility with existing programs.\n\nBPF_CALL_\u003cn\u003e() internals are quite similar to SYSCALL_DEFINE\u003cn\u003e() ones with some\nfundamental differences, for example, for generating the actual helper function\nthat carries all u64 regs, we need to fill unused regs, so that we always end up\nwith 5 u64 regs as an argument.\n\nI reviewed several 0-5 generated BPF_CALL_\u003cn\u003e() variants of the .i results and\nthey look all as expected. No sparse issue spotted. We let this also sit for a\nfew days with Fengguang\u0027s kbuild test robot, and there were no issues seen. On\ns390, it barked on the \"uses dynamic stack allocation\" notice, which is an old\none from bpf_perf_event_output{,_tp}() reappearing here due to the conversion\nto the call wrapper, just telling that the perf raw record/frag sits on stack\n(gcc with s390\u0027s -mwarn-dynamicstack), but that\u0027s all. Did various runtime tests\nand they were fine as well. All eBPF helpers are now converted to use these\nmacros, getting rid of a good chunk of all the raw castings.\n\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "f035a51536af9802f55d8c79bd87f184ebffb093",
      "tree": "b10ca650031a03f3752a1ea9f7178282e8eb0a75",
      "parents": [
        "6088b5823b4cb132a838878747384cbfb5ce6646"
      ],
      "author": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Fri Sep 09 02:45:29 2016 +0200"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Fri Sep 09 19:36:04 2016 -0700"
      },
      "message": "bpf: add BPF_SIZEOF and BPF_FIELD_SIZEOF macros\n\nAdd BPF_SIZEOF() and BPF_FIELD_SIZEOF() macros to improve the code a bit\nwhich otherwise often result in overly long bytes_to_bpf_size(sizeof())\nand bytes_to_bpf_size(FIELD_SIZEOF()) lines. So place them into a macro\nhelper instead. Moreover, we currently have a BUILD_BUG_ON(BPF_FIELD_SIZEOF())\ncheck in convert_bpf_extensions(), but we should rather make that generic\nas well and add a BUILD_BUG_ON() test in all BPF_SIZEOF()/BPF_FIELD_SIZEOF()\nusers to detect any rewriter size issues at compile time. Note, there are\ncurrently none, but we want to assert that it stays this way.\n\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "0515e5999a466dfe6e1924f460da599bb6821487",
      "tree": "e4ba954bea80d223248c57885019b7620375164a",
      "parents": [
        "ea2e7ce5d0fc878463ba39deb46cf2ab20398fd2"
      ],
      "author": {
        "name": "Alexei Starovoitov",
        "email": "ast@fb.com",
        "time": "Thu Sep 01 18:37:22 2016 -0700"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Fri Sep 02 10:46:44 2016 -0700"
      },
      "message": "bpf: introduce BPF_PROG_TYPE_PERF_EVENT program type\n\nIntroduce BPF_PROG_TYPE_PERF_EVENT programs that can be attached to\nHW and SW perf events (PERF_TYPE_HARDWARE and PERF_TYPE_SOFTWARE\ncorrespondingly in uapi/linux/perf_event.h)\n\nThe program visible context meta structure is\nstruct bpf_perf_event_data {\n    struct pt_regs regs;\n     __u64 sample_period;\n};\nwhich is accessible directly from the program:\nint bpf_prog(struct bpf_perf_event_data *ctx)\n{\n  ... ctx-\u003esample_period ...\n  ... ctx-\u003eregs.ip ...\n}\n\nThe bpf verifier rewrites the accesses into kernel internal\nstruct bpf_perf_event_data_kern which allows changing\nstruct perf_sample_data without affecting bpf programs.\nNew fields can be added to the end of struct bpf_perf_event_data\nin the future.\n\nSigned-off-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nAcked-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "8937bd80fce64a25be23c7790459d93f7b1e9b79",
      "tree": "62cfc0819d3d6407636c8fc1e4d00a43081e24c3",
      "parents": [
        "1633ac0a2e774a9af339b9290ef33cd97a918c54"
      ],
      "author": {
        "name": "Alexei Starovoitov",
        "email": "ast@fb.com",
        "time": "Thu Aug 11 18:17:18 2016 -0700"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Fri Aug 12 21:57:05 2016 -0700"
      },
      "message": "bpf: allow bpf_get_prandom_u32() to be used in tracing\n\nbpf_get_prandom_u32() was initially introduced for socket filters\nand later requested numberous times to be added to tracing bpf programs\nfor the same reason as in socket filters: to be able to randomly\nselect incoming events.\n\nSigned-off-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nAcked-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "60d20f9195b260bdf0ac10c275ae9f6016f9c069",
      "tree": "6f93dff429db884cf36aabdbb93c7ad9695904f7",
      "parents": [
        "aed704b7a634954dc28fe5c4b49db478cf2d96b7"
      ],
      "author": {
        "name": "Sargun Dhillon",
        "email": "sargun@sargun.me",
        "time": "Fri Aug 12 08:56:52 2016 -0700"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Fri Aug 12 21:49:41 2016 -0700"
      },
      "message": "bpf: Add bpf_current_task_under_cgroup helper\n\nThis adds a bpf helper that\u0027s similar to the skb_in_cgroup helper to check\nwhether the probe is currently executing in the context of a specific\nsubset of the cgroupsv2 hierarchy. It does this based on membership test\nfor a cgroup arraymap. It is invalid to call this in an interrupt, and\nit\u0027ll return an error. The helper is primarily to be used in debugging\nactivities for containers, where you may have multiple programs running in\na given top-level \"container\".\n\nSigned-off-by: Sargun Dhillon \u003csargun@sargun.me\u003e\nCc: Alexei Starovoitov \u003cast@kernel.org\u003e\nCc: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nCc: Tejun Heo \u003ctj@kernel.org\u003e\nAcked-by: Tejun Heo \u003ctj@kernel.org\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nAcked-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "96ae52279594470622ff0585621a13e96b700600",
      "tree": "72b6be55be49c626dfd6d1b1ac2673b4a0cd649b",
      "parents": [
        "9b022a6e0f26af108b9105b16b310393c898d9bd"
      ],
      "author": {
        "name": "Sargun Dhillon",
        "email": "sargun@sargun.me",
        "time": "Mon Jul 25 05:54:46 2016 -0700"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Mon Jul 25 18:07:48 2016 -0700"
      },
      "message": "bpf: Add bpf_probe_write_user BPF helper to be called in tracers\n\nThis allows user memory to be written to during the course of a kprobe.\nIt shouldn\u0027t be used to implement any kind of security mechanism\nbecause of TOC-TOU attacks, but rather to debug, divert, and\nmanipulate execution of semi-cooperative processes.\n\nAlthough it uses probe_kernel_write, we limit the address space\nthe probe can write into by checking the space with access_ok.\nWe do this as opposed to calling copy_to_user directly, in order\nto avoid sleeping. In addition we ensure the threads\u0027s current fs\n/ segment is USER_DS and the thread isn\u0027t exiting nor a kernel thread.\n\nGiven this feature is meant for experiments, and it has a risk of\ncrashing the system, and running programs, we print a warning on\nwhen a proglet that attempts to use this helper is installed,\nalong with the pid and process name.\n\nSigned-off-by: Sargun Dhillon \u003csargun@sargun.me\u003e\nCc: Alexei Starovoitov \u003cast@kernel.org\u003e\nCc: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "183fc1537ec39be242dc8b619f71fc11b393d295",
      "tree": "e18fa01b3097c5e2d699838098fe7e4d453d296e",
      "parents": [
        "a725ee3e44e39dab1ec82cc745899a785d2a555e"
      ],
      "author": {
        "name": "Andrew Morton",
        "email": "akpm@linux-foundation.org",
        "time": "Mon Jul 18 15:50:58 2016 -0700"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Tue Jul 19 19:27:01 2016 -0700"
      },
      "message": "kernel/trace/bpf_trace.c: work around gcc-4.4.4 anon union initialization bug\n\nkernel/trace/bpf_trace.c: In function \u0027bpf_event_output\u0027:\nkernel/trace/bpf_trace.c:312: error: unknown field \u0027next\u0027 specified in initializer\nkernel/trace/bpf_trace.c:312: warning: missing braces around initializer\nkernel/trace/bpf_trace.c:312: warning: (near initialization for \u0027raw.frag.\u003canonymous\u003e\u0027)\n\nFixes: 555c8a8623a3a87 (\"bpf: avoid stack copy and use skb ctx for event output\")\nAcked-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nCc: Alexei Starovoitov \u003cast@kernel.org\u003e\nCc: David S. Miller \u003cdavem@davemloft.net\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "555c8a8623a3a87b3c990ba30b7fd2e5914e41d2",
      "tree": "51e4fcdea68602c29e21fdd23519a214b2208ed6",
      "parents": [
        "8e7a3920ac277dd4e690c0e70c9750176e3acb83"
      ],
      "author": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Thu Jul 14 18:08:05 2016 +0200"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Fri Jul 15 14:23:56 2016 -0700"
      },
      "message": "bpf: avoid stack copy and use skb ctx for event output\n\nThis work addresses a couple of issues bpf_skb_event_output()\nhelper currently has: i) We need two copies instead of just a\nsingle one for the skb data when it should be part of a sample.\nThe data can be non-linear and thus needs to be extracted via\nbpf_skb_load_bytes() helper first, and then copied once again\ninto the ring buffer slot. ii) Since bpf_skb_load_bytes()\ncurrently needs to be used first, the helper needs to see a\nconstant size on the passed stack buffer to make sure BPF\nverifier can do sanity checks on it during verification time.\nThus, just passing skb-\u003elen (or any other non-constant value)\nwouldn\u0027t work, but changing bpf_skb_load_bytes() is also not\nthe proper solution, since the two copies are generally still\nneeded. iii) bpf_skb_load_bytes() is just for rather small\nbuffers like headers, since they need to sit on the limited\nBPF stack anyway. Instead of working around in bpf_skb_load_bytes(),\nthis work improves the bpf_skb_event_output() helper to address\nall 3 at once.\n\nWe can make use of the passed in skb context that we have in\nthe helper anyway, and use some of the reserved flag bits as\na length argument. The helper will use the new __output_custom()\nfacility from perf side with bpf_skb_copy() as callback helper\nto walk and extract the data. It will pass the data for setup\nto bpf_event_output(), which generates and pushes the raw record\nwith an additional frag part. The linear data used in the first\nfrag of the record serves as programmatically defined meta data\npassed along with the appended sample.\n\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "8e7a3920ac277dd4e690c0e70c9750176e3acb83",
      "tree": "2996ef0644920652d639833a2fc99bc2f204f7cf",
      "parents": [
        "7e3f977edd0bd9ea6104156feba95bb5ae9bdd38"
      ],
      "author": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Thu Jul 14 18:08:04 2016 +0200"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Fri Jul 15 14:23:56 2016 -0700"
      },
      "message": "bpf, perf: split bpf_perf_event_output\n\nSplit the bpf_perf_event_output() helper as a preparation into\ntwo parts. The new bpf_perf_event_output() will prepare the raw\nrecord itself and test for unknown flags from BPF trace context,\nwhere the __bpf_perf_event_output() does the core work. The\nlatter will be reused later on from bpf_event_output() directly.\n\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "7e3f977edd0bd9ea6104156feba95bb5ae9bdd38",
      "tree": "f4e588e84b4360cd0a3145e00d1cd7cad02ba1ff",
      "parents": [
        "7acef60455c4814a52afb8629d166a3b4dfa0ebb"
      ],
      "author": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Thu Jul 14 18:08:03 2016 +0200"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Fri Jul 15 14:23:56 2016 -0700"
      },
      "message": "perf, events: add non-linear data support for raw records\n\nThis patch adds support for non-linear data on raw records. It\nextends raw records to have one or multiple fragments that will\nbe written linearly into the ring slot, where each fragment can\noptionally have a custom callback handler to walk and extract\ncomplex, possibly non-linear data.\n\nIf a callback handler is provided for a fragment, then the new\n__output_custom() will be used instead of __output_copy() for\nthe perf_output_sample() part. perf_prepare_sample() does all\nthe size calculation only once, so perf_output_sample() doesn\u0027t\nneed to redo the same work anymore, meaning real_size and padding\nwill be cached in the raw record. The raw record becomes 32 bytes\nin size without holes; to not increase it further and to avoid\ndoing unnecessary recalculations in fast-path, we can reuse\nnext pointer of the last fragment, idea here is borrowed from\nZERO_OR_NULL_PTR(), which should keep the perf_output_sample()\npath for PERF_SAMPLE_RAW minimal.\n\nThis facility is needed for BPF\u0027s event output helper as a first\nuser that will, in a follow-up, add an additional perf_raw_frag\nto its perf_raw_record in order to be able to more efficiently\ndump skb context after a linear head meta data related to it.\nskbs can be non-linear and thus need a custom output function to\ndump buffers. Currently, the skb data needs to be copied twice;\nwith the help of __output_custom() this work only needs to be\ndone once. Future users could be things like XDP/BPF programs\nthat work on different context though and would thus also have\na different callback function.\n\nThe few users of raw records are adapted to initialize their frag\ndata from the raw record itself, no change in behavior for them.\nThe code is based upon a PoC diff provided by Peter Zijlstra [1].\n\n  [1] http://thread.gmane.org/gmane.linux.network/421294\n\nSuggested-by: Peter Zijlstra \u003cpeterz@infradead.org\u003e\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "606274c5abd8e245add01bc7145a8cbb92b69ba8",
      "tree": "762718058c0cf327284f07fa6c1eb2410ee3e0b7",
      "parents": [
        "d390238c4fba7c87a3bcd859ce3373c864eb7b02"
      ],
      "author": {
        "name": "Alexei Starovoitov",
        "email": "ast@fb.com",
        "time": "Wed Jul 06 22:38:36 2016 -0700"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Sat Jul 09 00:00:16 2016 -0400"
      },
      "message": "bpf: introduce bpf_get_current_task() helper\n\nover time there were multiple requests to access different data\nstructures and fields of task_struct current, so finally add\nthe helper to access \u0027current\u0027 as-is. Tracing bpf programs will do\nthe rest of walking the pointers via bpf_probe_read().\nNote that current can be null and bpf program has to deal it with,\nbut even dumb passing null into bpf_probe_read() is still safe.\n\nSuggested-by: Brendan Gregg \u003cbrendan.d.gregg@gmail.com\u003e\nSigned-off-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nAcked-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "6816a7ffce32e999601825ddfd887f36d3052932",
      "tree": "99a35abec2ab665d9cd72ae8f143b544d5eef923",
      "parents": [
        "d79313303181d357d293453fb8486bdc87bfd2f4"
      ],
      "author": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Tue Jun 28 12:18:25 2016 +0200"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Thu Jun 30 05:54:40 2016 -0400"
      },
      "message": "bpf, trace: add BPF_F_CURRENT_CPU flag for bpf_perf_event_read\n\nFollow-up commit to 1e33759c788c (\"bpf, trace: add BPF_F_CURRENT_CPU\nflag for bpf_perf_event_output\") to add the same functionality into\nbpf_perf_event_read() helper. The split of index into flags and index\ncomponent is also safe here, since such large maps are rejected during\nmap allocation time.\n\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "d79313303181d357d293453fb8486bdc87bfd2f4",
      "tree": "275cae8d173c2fdec0c95b9a797cb1ddad62f266",
      "parents": [
        "1ca1cc98bf7418c680415bfce05699f67510a7fd"
      ],
      "author": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Tue Jun 28 12:18:24 2016 +0200"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Thu Jun 30 05:54:40 2016 -0400"
      },
      "message": "bpf, trace: fetch current cpu only once\n\nWe currently have two invocations, which is unnecessary. Fetch it only\nonce and use the smp_processor_id() variant, so we also get preemption\nchecks along with it when DEBUG_PREEMPT is set.\n\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "1ca1cc98bf7418c680415bfce05699f67510a7fd",
      "tree": "c7f9924f35a3645b99208534b887c84077cfb975",
      "parents": [
        "ee58b57100ca953da7320c285315a95db2f7053d"
      ],
      "author": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Tue Jun 28 12:18:23 2016 +0200"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Thu Jun 30 05:54:40 2016 -0400"
      },
      "message": "bpf: minor cleanups on fd maps and helpers\n\nSome minor cleanups: i) Remove the unlikely() from fd array map lookups\nand let the CPU branch predictor do its job, scenarios where there is not\nalways a map entry are very well valid. ii) Move the attribute type check\nin the bpf_perf_event_read() helper a bit earlier so it\u0027s consistent wrt\nchecks with bpf_perf_event_output() helper as well. iii) remove some\ncomments that are self-documenting in kprobe_prog_is_valid_access() and\ntherefore make it consistent to tp_prog_is_valid_access() as well.\n\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "ee58b57100ca953da7320c285315a95db2f7053d",
      "tree": "77b815a31240adc4d6326346908137fc6c2c3a96",
      "parents": [
        "6f30e8b022c8e3a722928ddb1a2ae0be852fcc0e",
        "e7bdea7750eb2a64aea4a08fa5c0a31719c8155d"
      ],
      "author": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Thu Jun 30 05:03:36 2016 -0400"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Thu Jun 30 05:03:36 2016 -0400"
      },
      "message": "Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net\n\nSeveral cases of overlapping changes, except the packet scheduler\nconflicts which deal with the addition of the free list parameter\nto qdisc_enqueue().\n\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "3b1efb196eee45b2f0c4994e0c43edb5e367f620",
      "tree": "b4f7d122f21e841f0057c624e064f8ca30622e48",
      "parents": [
        "d056a788765e67773124f520159185bc89f5d1ad"
      ],
      "author": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Wed Jun 15 22:47:14 2016 +0200"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Wed Jun 15 23:42:57 2016 -0700"
      },
      "message": "bpf, maps: flush own entries on perf map release\n\nThe behavior of perf event arrays are quite different from all\nothers as they are tightly coupled to perf event fds, f.e. shown\nrecently by commit e03e7ee34fdd (\"perf/bpf: Convert perf_event_array\nto use struct file\") to make refcounting on perf event more robust.\nA remaining issue that the current code still has is that since\nadditions to the perf event array take a reference on the struct\nfile via perf_event_get() and are only released via fput() (that\ncleans up the perf event eventually via perf_event_release_kernel())\nwhen the element is either manually removed from the map from user\nspace or automatically when the last reference on the perf event\nmap is dropped. However, this leads us to dangling struct file\u0027s\nwhen the map gets pinned after the application owning the perf\nevent descriptor exits, and since the struct file reference will\nin such case only be manually dropped or via pinned file removal,\nit leads to the perf event living longer than necessary, consuming\nneedlessly resources for that time.\n\nRelations between perf event fds and bpf perf event map fds can be\nrather complex. F.e. maps can act as demuxers among different perf\nevent fds that can possibly be owned by different threads and based\non the index selection from the program, events get dispatched to\none of the per-cpu fd endpoints. One perf event fd (or, rather a\nper-cpu set of them) can also live in multiple perf event maps at\nthe same time, listening for events. Also, another requirement is\nthat perf event fds can get closed from application side after they\nhave been attached to the perf event map, so that on exit perf event\nmap will take care of dropping their references eventually. Likewise,\nwhen such maps are pinned, the intended behavior is that a user\napplication does bpf_obj_get(), puts its fds in there and on exit\nwhen fd is released, they are dropped from the map again, so the map\nacts rather as connector endpoint. This also makes perf event maps\ninherently different from program arrays as described in more detail\nin commit c9da161c6517 (\"bpf: fix clearing on persistent program\narray maps\").\n\nTo tackle this, map entries are marked by the map struct file that\nadded the element to the map. And when the last reference to that map\nstruct file is released from user space, then the tracked entries\nare purged from the map. This is okay, because new map struct files\ninstances resp. frontends to the anon inode are provided via\nbpf_map_new_fd() that is called when we invoke bpf_obj_get_user()\nfor retrieving a pinned map, but also when an initial instance is\ncreated via map_create(). The rest is resolved by the vfs layer\nautomatically for us by keeping reference count on the map\u0027s struct\nfile. Any concurrent updates on the map slot are fine as well, it\njust means that perf_event_fd_array_release() needs to delete less\nof its own entires.\n\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "ad572d174787daa59e24b8b5c83028c09cdb5ddb",
      "tree": "742f85968aea27bc975164e839dbc00312d3184a",
      "parents": [
        "19de99f70b87fcc3338da52a89c439b088cbff71"
      ],
      "author": {
        "name": "Alexei Starovoitov",
        "email": "ast@fb.com",
        "time": "Wed Jun 15 18:25:39 2016 -0700"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Wed Jun 15 23:37:54 2016 -0700"
      },
      "message": "bpf, trace: check event type in bpf_perf_event_read\n\nsimilar to bpf_perf_event_output() the bpf_perf_event_read() helper\nneeds to check the type of the perf_event before reading the counter.\n\nFixes: a43eec304259 (\"bpf: introduce bpf_perf_event_output() helper\")\nReported-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nSigned-off-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nAcked-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "19de99f70b87fcc3338da52a89c439b088cbff71",
      "tree": "43b5ff80043ee9ea62e09fe568502c9d68a188ee",
      "parents": [
        "e582615ad33dbd39623084a02e95567b116e1eea"
      ],
      "author": {
        "name": "Alexei Starovoitov",
        "email": "ast@fb.com",
        "time": "Wed Jun 15 18:25:38 2016 -0700"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Wed Jun 15 23:37:54 2016 -0700"
      },
      "message": "bpf: fix matching of data/data_end in verifier\n\nThe ctx structure passed into bpf programs is different depending on bpf\nprogram type. The verifier incorrectly marked ctx-\u003edata and ctx-\u003edata_end\naccess based on ctx offset only. That caused loads in tracing programs\nint bpf_prog(struct pt_regs *ctx) { .. ctx-\u003eax .. }\nto be incorrectly marked as PTR_TO_PACKET which later caused verifier\nto reject the program that was actually valid in tracing context.\nFix this by doing program type specific matching of ctx offsets.\n\nFixes: 969bf05eb3ce (\"bpf: direct packet access\")\nReported-by: Sasha Goldshtein \u003cgoldshtn@gmail.com\u003e\nSigned-off-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nAcked-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "5b6c1b4d46b0dae4edea636a776d09f2064f4cd7",
      "tree": "eb0522bdbb70a44f7942f8636da2b36f54f50ac6",
      "parents": [
        "a27758ffaf96f89002129eedb2cc172d254099f8"
      ],
      "author": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Sat Jun 04 20:50:59 2016 +0200"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Tue Jun 07 14:48:03 2016 -0700"
      },
      "message": "bpf, trace: use READ_ONCE for retrieving file ptr\n\nIn bpf_perf_event_read() and bpf_perf_event_output(), we must use\nREAD_ONCE() for fetching the struct file pointer, which could get\nupdated concurrently, so we must prevent the compiler from potential\nrefetching.\n\nWe already do this with tail calls for fetching the related bpf_prog,\nbut not so on stored perf events. Semantics for both are the same\nwith regards to updates.\n\nFixes: a43eec304259 (\"bpf: introduce bpf_perf_event_output() helper\")\nFixes: 35578d798400 (\"bpf: Implement function bpf_perf_event_read() that get the selected hardware PMU conuter\")\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "bd570ff970a54df653b48ed0cfb373f2ebed083d",
      "tree": "85e4ed3aa2bb859cf770247735eb9e7d9a909cb7",
      "parents": [
        "1e33759c788c78f31d4d6f65bac647b23624734c"
      ],
      "author": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Mon Apr 18 21:01:24 2016 +0200"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Tue Apr 19 20:26:11 2016 -0400"
      },
      "message": "bpf: add event output helper for notifications/sampling/logging\n\nThis patch adds a new helper for cls/act programs that can push events\nto user space applications. For networking, this can be f.e. for sampling,\ndebugging, logging purposes or pushing of arbitrary wake-up events. The\nidea is similar to a43eec304259 (\"bpf: introduce bpf_perf_event_output()\nhelper\") and 39111695b1b8 (\"samples: bpf: add bpf_perf_event_output example\").\n\nThe eBPF program utilizes a perf event array map that user space populates\nwith fds from perf_event_open(), the eBPF program calls into the helper\nf.e. as skb_event_output(skb, \u0026my_map, BPF_F_CURRENT_CPU, raw, sizeof(raw))\nso that the raw data is pushed into the fd f.e. at the map index of the\ncurrent CPU.\n\nUser space can poll/mmap/etc on this and has a data channel for receiving\nevents that can be post-processed. The nice thing is that since the eBPF\nprogram and user space application making use of it are tightly coupled,\nthey can define their own arbitrary raw data format and what/when they\nwant to push.\n\nWhile f.e. packet headers could be one part of the meta data that is being\npushed, this is not a substitute for things like packet sockets as whole\npacket is not being pushed and push is only done in a single direction.\nIntention is more of a generically usable, efficient event pipe to applications.\nWorkflow is that tc can pin the map and applications can attach themselves\ne.g. after cls/act setup to one or multiple map slots, demuxing is done by\nthe eBPF program.\n\nAdding this facility is with minimal effort, it reuses the helper\nintroduced in a43eec304259 (\"bpf: introduce bpf_perf_event_output() helper\")\nand we get its functionality for free by overloading its BPF_FUNC_ identifier\nfor cls/act programs, ctx is currently unused, but will be made use of in\nfuture. Example will be added to iproute2\u0027s BPF example files.\n\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nSigned-off-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "1e33759c788c78f31d4d6f65bac647b23624734c",
      "tree": "6fe7627843e67fab42dd888d109f1de03040012d",
      "parents": [
        "553bc087caf052458dc9f92bc42710027740caa9"
      ],
      "author": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Mon Apr 18 21:01:23 2016 +0200"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Tue Apr 19 20:26:11 2016 -0400"
      },
      "message": "bpf, trace: add BPF_F_CURRENT_CPU flag for bpf_perf_event_output\n\nAdd a BPF_F_CURRENT_CPU flag to optimize the use-case where user space has\nper-CPU ring buffers and the eBPF program pushes the data into the current\nCPU\u0027s ring buffer which saves us an extra helper function call in eBPF.\nAlso, make sure to properly reserve the remaining flags which are not used.\n\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nSigned-off-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "266a0a790fb545fa1802a899ac44f61b1d6335a7",
      "tree": "903b92e6f266ed94bf52efa7ca04d7c8809854cc",
      "parents": [
        "b520bd07595b117a08871ebc0a16452cc798d35b"
      ],
      "author": {
        "name": "Arnd Bergmann",
        "email": "arnd@arndb.de",
        "time": "Sat Apr 16 22:29:33 2016 +0200"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Mon Apr 18 20:58:55 2016 -0400"
      },
      "message": "bpf: avoid warning for wrong pointer cast\n\nTwo new functions in bpf contain a cast from a \u0027u64\u0027 to a\npointer. This works on 64-bit architectures but causes a warning\non all 32-bit architectures:\n\nkernel/trace/bpf_trace.c: In function \u0027bpf_perf_event_output_tp\u0027:\nkernel/trace/bpf_trace.c:350:13: error: cast to pointer from integer of different size [-Werror\u003dint-to-pointer-cast]\n  u64 ctx \u003d *(long *)r1;\n\nThis changes the cast to first convert the u64 argument into a uintptr_t,\nwhich is guaranteed to be the same size as a pointer.\n\nSigned-off-by: Arnd Bergmann \u003carnd@arndb.de\u003e\nFixes: 9940d67c93b5 (\"bpf: support bpf_get_stackid() and bpf_perf_event_output() in tracepoint programs\")\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "074f528eed408b467516e142fa4c45e5b0d2ba16",
      "tree": "e42352604c4f0db159881faff4b7cef49393d878",
      "parents": [
        "435faee1aae9c1ac231f89e4faf0437bfe29f425"
      ],
      "author": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Wed Apr 13 00:10:52 2016 +0200"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Thu Apr 14 21:40:41 2016 -0400"
      },
      "message": "bpf: convert relevant helper args to ARG_PTR_TO_RAW_STACK\n\nThis patch converts all helpers that can use ARG_PTR_TO_RAW_STACK as argument\ntype. For tc programs this is bpf_skb_load_bytes(), bpf_skb_get_tunnel_key(),\nbpf_skb_get_tunnel_opt(). For tracing, this optimizes bpf_get_current_comm()\nand bpf_probe_read(). The check in bpf_skb_load_bytes() for MAX_BPF_STACK can\nalso be removed since the verifier already makes sure we stay within bounds\non stack buffers.\n\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nAcked-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "9940d67c93b5bb7ddcf862b41b1847cb728186c4",
      "tree": "e5b9a36df5bc8bde9b7435cda796d2cefe686e45",
      "parents": [
        "9fd82b610ba3351f05a59c3e9117cfefe82f7751"
      ],
      "author": {
        "name": "Alexei Starovoitov",
        "email": "ast@fb.com",
        "time": "Wed Apr 06 18:43:27 2016 -0700"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Thu Apr 07 21:04:26 2016 -0400"
      },
      "message": "bpf: support bpf_get_stackid() and bpf_perf_event_output() in tracepoint programs\n\nneeds two wrapper functions to fetch \u0027struct pt_regs *\u0027 to convert\ntracepoint bpf context into kprobe bpf context to reuse existing\nhelper functions\n\nSigned-off-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "9fd82b610ba3351f05a59c3e9117cfefe82f7751",
      "tree": "e48dab2bae5379fbb377f04dc866324b0f9117d0",
      "parents": [
        "98b5c2c65c2951772a8fc661f50d675e450e8bce"
      ],
      "author": {
        "name": "Alexei Starovoitov",
        "email": "ast@fb.com",
        "time": "Wed Apr 06 18:43:26 2016 -0700"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Thu Apr 07 21:04:26 2016 -0400"
      },
      "message": "bpf: register BPF_PROG_TYPE_TRACEPOINT program type\n\nregister tracepoint bpf program type and let it call the same set\nof helper functions as BPF_PROG_TYPE_KPROBE\n\nSigned-off-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "b121d1e74d1f24654bdc3165d3db1ca149501356",
      "tree": "aa0326edc95e2152a2277386b5363beb7768f7dc",
      "parents": [
        "8aba8b83128a04197991518e241aafd3323b705d"
      ],
      "author": {
        "name": "Alexei Starovoitov",
        "email": "ast@fb.com",
        "time": "Mon Mar 07 21:57:13 2016 -0800"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Tue Mar 08 15:28:30 2016 -0500"
      },
      "message": "bpf: prevent kprobe+bpf deadlocks\n\nif kprobe is placed within update or delete hash map helpers\nthat hold bucket spin lock and triggered bpf program is trying to\ngrab the spinlock for the same bucket on the same cpu, it will\ndeadlock.\nFix it by extending existing recursion prevention mechanism.\n\nNote, map_lookup and other tracing helpers don\u0027t have this problem,\nsince they don\u0027t hold any locks and don\u0027t modify global data.\nbpf_trace_printk has its own recursive check and ok as well.\n\nSigned-off-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nAcked-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "d5a3b1f691865be576c2bffa708549b8cdccda19",
      "tree": "12f6009f168baee6889a0fde07d60ac3f5c12aac",
      "parents": [
        "568b329a02f75ed3aaae5eb2cca384cb9e09cb29"
      ],
      "author": {
        "name": "Alexei Starovoitov",
        "email": "ast@fb.com",
        "time": "Wed Feb 17 19:58:58 2016 -0800"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Sat Feb 20 00:21:44 2016 -0500"
      },
      "message": "bpf: introduce BPF_MAP_TYPE_STACK_TRACE\n\nadd new map type to store stack traces and corresponding helper\nbpf_get_stackid(ctx, map, flags) - walk user or kernel stack and return id\n@ctx: struct pt_regs*\n@map: pointer to stack_trace map\n@flags: bits 0-7 - numer of stack frames to skip\n        bit 8 - collect user stack instead of kernel\n        bit 9 - compare stacks by hash only\n        bit 10 - if two different stacks hash into the same stackid\n                 discard old\n        other bits - reserved\nReturn: \u003e\u003d 0 stackid on success or negative error\n\nstackid is a 32-bit integer handle that can be further combined with\nother data (including other stackid) and used as a key into maps.\n\nUserspace will access stackmap using standard lookup/delete syscall commands to\nretrieve full stack trace for given stackid.\n\nSigned-off-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "e03e7ee34fdd1c3ef494949a75cb8c61c7265fa9",
      "tree": "17835d21a367a7b6cea78c93c076e7f65843767f",
      "parents": [
        "828b6f0e26170938d617e99a17177453be4d77a3"
      ],
      "author": {
        "name": "Alexei Starovoitov",
        "email": "alexei.starovoitov@gmail.com",
        "time": "Mon Jan 25 20:59:49 2016 -0800"
      },
      "committer": {
        "name": "Ingo Molnar",
        "email": "mingo@kernel.org",
        "time": "Fri Jan 29 08:35:25 2016 +0100"
      },
      "message": "perf/bpf: Convert perf_event_array to use struct file\n\nRobustify refcounting.\n\nSigned-off-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: Peter Zijlstra (Intel) \u003cpeterz@infradead.org\u003e\nCc: Alexander Shishkin \u003calexander.shishkin@linux.intel.com\u003e\nCc: Arnaldo Carvalho de Melo \u003cacme@infradead.org\u003e\nCc: Arnaldo Carvalho de Melo \u003cacme@redhat.com\u003e\nCc: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nCc: David Ahern \u003cdsahern@gmail.com\u003e\nCc: Jiri Olsa \u003cjolsa@kernel.org\u003e\nCc: Jiri Olsa \u003cjolsa@redhat.com\u003e\nCc: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nCc: Namhyung Kim \u003cnamhyung@kernel.org\u003e\nCc: Peter Zijlstra \u003cpeterz@infradead.org\u003e\nCc: Stephane Eranian \u003ceranian@google.com\u003e\nCc: Thomas Gleixner \u003ctglx@linutronix.de\u003e\nCc: Vince Weaver \u003cvincent.weaver@maine.edu\u003e\nCc: Wang Nan \u003cwangnan0@huawei.com\u003e\nCc: vince@deater.net\nLink: http://lkml.kernel.org/r/20160126045947.GA40151@ast-mbp.thefacebook.com\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\n"
    },
    {
      "commit": "27dff4e04199cf0ecf06239a26d0d225d3c046e9",
      "tree": "35398cbc9f53eefe5c98f1f81cca3a482f10a31b",
      "parents": [
        "c68c0fa29341754de86b6e5317b6074f1e334581"
      ],
      "author": {
        "name": "Julia Lawall",
        "email": "Julia.Lawall@lip6.fr",
        "time": "Fri Dec 11 18:35:59 2015 +0100"
      },
      "committer": {
        "name": "Steven Rostedt",
        "email": "rostedt@goodmis.org",
        "time": "Wed Dec 23 14:27:19 2015 -0500"
      },
      "message": "bpf: Constify bpf_verifier_ops structure\n\nThis bpf_verifier_ops structure is never modified, like the other\nbpf_verifier_ops structures, so declare it as const.\n\nDone with the help of Coccinelle.\n\nLink: http://lkml.kernel.org/r/1449855359-13724-1-git-send-email-Julia.Lawall@lip6.fr\n\nSigned-off-by: Julia Lawall \u003cJulia.Lawall@lip6.fr\u003e\nSigned-off-by: Steven Rostedt \u003crostedt@goodmis.org\u003e\n"
    },
    {
      "commit": "1075ef5950da97927ae1b3ef76d03e211c4fdb55",
      "tree": "1d09bdae760e16777b9ba81a126bde5a93c83151",
      "parents": [
        "62544ce8e01c1879d420ba309f7f319d24c0f4e6"
      ],
      "author": {
        "name": "Alexei Starovoitov",
        "email": "ast@plumgrid.com",
        "time": "Fri Oct 23 14:58:19 2015 -0700"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Mon Oct 26 21:53:34 2015 -0700"
      },
      "message": "bpf: make tracing helpers gpl only\n\nexported perf symbols are GPL only, mark eBPF helper functions\nused in tracing as GPL only as well.\n\nSuggested-by: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nSigned-off-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "62544ce8e01c1879d420ba309f7f319d24c0f4e6",
      "tree": "394d2f12e6a065b53b3d71bd89ea4b931ce3ec71",
      "parents": [
        "8b7c94e3478dbb0296293b43a974c3561d01e9fb"
      ],
      "author": {
        "name": "Alexei Starovoitov",
        "email": "ast@plumgrid.com",
        "time": "Thu Oct 22 17:10:14 2015 -0700"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Mon Oct 26 21:49:26 2015 -0700"
      },
      "message": "bpf: fix bpf_perf_event_read() helper\n\nFix safety checks for bpf_perf_event_read():\n- only non-inherited events can be added to perf_event_array map\n  (do this check statically at map insertion time)\n- dynamically check that event is local and !pmu-\u003ecount\nOtherwise buggy bpf program can cause kernel splat.\n\nAlso fix error path after perf_event_attrs()\nand remove redundant \u0027extern\u0027.\n\nFixes: 35578d798400 (\"bpf: Implement function bpf_perf_event_read() that get the selected hardware PMU conuter\")\nSigned-off-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nTested-by: Wang Nan \u003cwangnan0@huawei.com\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "a43eec304259a6c637f4014a6d4767159b6a3aa3",
      "tree": "aecaeb92ff5263f446b002793d89a2a211dc246b",
      "parents": [
        "fa128e6a148a0a58355bd6814c6283515bbd028a"
      ],
      "author": {
        "name": "Alexei Starovoitov",
        "email": "ast@plumgrid.com",
        "time": "Tue Oct 20 20:02:34 2015 -0700"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Thu Oct 22 06:42:15 2015 -0700"
      },
      "message": "bpf: introduce bpf_perf_event_output() helper\n\nThis helper is used to send raw data from eBPF program into\nspecial PERF_TYPE_SOFTWARE/PERF_COUNT_SW_BPF_OUTPUT perf_event.\nUser space needs to perf_event_open() it (either for one or all cpus) and\nstore FD into perf_event_array (similar to bpf_perf_event_read() helper)\nbefore eBPF program can send data into it.\n\nToday the programs triggered by kprobe collect the data and either store\nit into the maps or print it via bpf_trace_printk() where latter is the debug\nfacility and not suitable to stream the data. This new helper replaces\nsuch bpf_trace_printk() usage and allows programs to have dedicated\nchannel into user space for post-processing of the raw data collected.\n\nSigned-off-by: Alexei Starovoitov \u003cast@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "8d3b7dce8622919da5c5822ef7338d6604c9fe6e",
      "tree": "8934c8b0f6aac4a01488c59c45b8dc55c64cc721",
      "parents": [
        "1a6877b9c0c2ad901d4335d909432d3bb6d3a330"
      ],
      "author": {
        "name": "Alexei Starovoitov",
        "email": "ast@plumgrid.com",
        "time": "Fri Aug 28 15:56:23 2015 -0700"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Fri Aug 28 16:27:27 2015 -0700"
      },
      "message": "bpf: add support for %s specifier to bpf_trace_printk()\n\n%s specifier makes bpf program and kernel debugging easier.\nTo make sure that trace_printk won\u0027t crash the unsafe string\nis copied into stack and unsafe pointer is substituted.\n\nThe following C program:\n #include \u003clinux/fs.h\u003e\nint foo(struct pt_regs *ctx, struct filename *filename)\n{\n  void *name \u003d 0;\n\n  bpf_probe_read(\u0026name, sizeof(name), \u0026filename-\u003ename);\n  bpf_trace_printk(\"executed %s\\n\", name);\n  return 0;\n}\n\nwhen attached to kprobe do_execve()\nwill produce output in /sys/kernel/debug/tracing/trace_pipe :\n    make-13492 [002] d..1  3250.997277: : executed /bin/sh\n      sh-13493 [004] d..1  3250.998716: : executed /usr/bin/gcc\n     gcc-13494 [002] d..1  3250.999822: : executed /usr/lib/gcc/x86_64-linux-gnu/4.7/cc1\n     gcc-13495 [002] d..1  3251.006731: : executed /usr/bin/as\n     gcc-13496 [002] d..1  3251.011831: : executed /usr/lib/gcc/x86_64-linux-gnu/4.7/collect2\ncollect2-13497 [000] d..1  3251.012941: : executed /usr/bin/ld\n\nSuggested-by: Brendan Gregg \u003cbrendan.d.gregg@gmail.com\u003e\nSigned-off-by: Alexei Starovoitov \u003cast@plumgrid.com\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "35578d7984003097af2b1e34502bc943d40c1804",
      "tree": "b2eca5ddc9446e771dd5a9e1629b12f98b9f2bf0",
      "parents": [
        "ea317b267e9d03a8241893aa176fba7661d07579"
      ],
      "author": {
        "name": "Kaixu Xia",
        "email": "xiakaixu@huawei.com",
        "time": "Thu Aug 06 07:02:35 2015 +0000"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Sun Aug 09 22:50:06 2015 -0700"
      },
      "message": "bpf: Implement function bpf_perf_event_read() that get the selected hardware PMU conuter\n\nAccording to the perf_event_map_fd and index, the function\nbpf_perf_event_read() can convert the corresponding map\nvalue to the pointer to struct perf_event and return the\nHardware PMU counter value.\n\nSigned-off-by: Kaixu Xia \u003cxiakaixu@huawei.com\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "ab1973d3258aa8c40d153dc12bbb1aac56731e47",
      "tree": "6b4c9543550d114d75fad4362c6ef526a0d24b77",
      "parents": [
        "0756ea3e85139d23a8148ebaa95411c2f0aa4f11"
      ],
      "author": {
        "name": "Alexei Starovoitov",
        "email": "ast@plumgrid.com",
        "time": "Fri Jun 12 19:39:14 2015 -0700"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Mon Jun 15 15:53:50 2015 -0700"
      },
      "message": "bpf: let kprobe programs use bpf_get_smp_processor_id() helper\n\nIt\u0027s useful to do per-cpu histograms.\n\nSuggested-by: Daniel Wagner \u003cdaniel.wagner@bmw-carit.de\u003e\nSigned-off-by: Alexei Starovoitov \u003cast@plumgrid.com\u003e\nAcked-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "0756ea3e85139d23a8148ebaa95411c2f0aa4f11",
      "tree": "16b702c8ca6da39fc16188f3bf767d238df8b5ff",
      "parents": [
        "ffeedafbf0236f03aeb2e8db273b3e5ae5f5bc89"
      ],
      "author": {
        "name": "Alexei Starovoitov",
        "email": "ast@plumgrid.com",
        "time": "Fri Jun 12 19:39:13 2015 -0700"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Mon Jun 15 15:53:50 2015 -0700"
      },
      "message": "bpf: allow networking programs to use bpf_trace_printk() for debugging\n\nbpf_trace_printk() is a helper function used to debug eBPF programs.\nLet socket and TC programs use it as well.\nNote, it\u0027s DEBUG ONLY helper. If it\u0027s used in the program,\nthe kernel will print warning banner to make sure users don\u0027t use\nit in production.\n\nSigned-off-by: Alexei Starovoitov \u003cast@plumgrid.com\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "ffeedafbf0236f03aeb2e8db273b3e5ae5f5bc89",
      "tree": "e00f1b0bba1c217afbcf4dda00ef950afdfcafbc",
      "parents": [
        "ada6c1de9ecabcfc5619479bcd29a208f2e248a0"
      ],
      "author": {
        "name": "Alexei Starovoitov",
        "email": "ast@plumgrid.com",
        "time": "Fri Jun 12 19:39:12 2015 -0700"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Mon Jun 15 15:53:50 2015 -0700"
      },
      "message": "bpf: introduce current-\u003epid, tgid, uid, gid, comm accessors\n\neBPF programs attached to kprobes need to filter based on\ncurrent-\u003epid, uid and other fields, so introduce helper functions:\n\nu64 bpf_get_current_pid_tgid(void)\nReturn: current-\u003etgid \u003c\u003c 32 | current-\u003epid\n\nu64 bpf_get_current_uid_gid(void)\nReturn: current_gid \u003c\u003c 32 | current_uid\n\nbpf_get_current_comm(char *buf, int size_of_buf)\nstores current-\u003ecomm into buf\n\nThey can be used from the programs attached to TC as well to classify packets\nbased on current task fields.\n\nUpdate tracex2 example to print histogram of write syscalls for each process\ninstead of aggregated for all.\n\nSigned-off-by: Alexei Starovoitov \u003cast@plumgrid.com\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "17ca8cbf49be3aa94bb1c2b7ee6545fd70094eb4",
      "tree": "100f160426a26857a776c4b3fd3beb8848bda474",
      "parents": [
        "a24c85abc0815c14d9e5266d06b9acd8a0a57b9a"
      ],
      "author": {
        "name": "Daniel Borkmann",
        "email": "daniel@iogearbox.net",
        "time": "Fri May 29 23:23:06 2015 +0200"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Sun May 31 21:44:44 2015 -0700"
      },
      "message": "ebpf: allow bpf_ktime_get_ns_proto also for networking\n\nAs this is already exported from tracing side via commit d9847d310ab4\n(\"tracing: Allow BPF programs to call bpf_ktime_get_ns()\"), we might\nas well want to move it to the core, so also networking users can make\nuse of it, e.g. to measure diffs for certain flows from ingress/egress.\n\nSigned-off-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nCc: Alexei Starovoitov \u003cast@plumgrid.com\u003e\nCc: Ingo Molnar \u003cmingo@kernel.org\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "04fd61ab36ec065e194ab5e74ae34a5240d992bb",
      "tree": "e14531e8775c71ca0508f97ba25af09d8d3db426",
      "parents": [
        "e7582bab5d28ea72e07cf2c74632eaf46a6c1a50"
      ],
      "author": {
        "name": "Alexei Starovoitov",
        "email": "ast@plumgrid.com",
        "time": "Tue May 19 16:59:03 2015 -0700"
      },
      "committer": {
        "name": "David S. Miller",
        "email": "davem@davemloft.net",
        "time": "Thu May 21 17:07:59 2015 -0400"
      },
      "message": "bpf: allow bpf programs to tail-call other bpf programs\n\nintroduce bpf_tail_call(ctx, \u0026jmp_table, index) helper function\nwhich can be used from BPF programs like:\nint bpf_prog(struct pt_regs *ctx)\n{\n  ...\n  bpf_tail_call(ctx, \u0026jmp_table, index);\n  ...\n}\nthat is roughly equivalent to:\nint bpf_prog(struct pt_regs *ctx)\n{\n  ...\n  if (jmp_table[index])\n    return (*jmp_table[index])(ctx);\n  ...\n}\nThe important detail that it\u0027s not a normal call, but a tail call.\nThe kernel stack is precious, so this helper reuses the current\nstack frame and jumps into another BPF program without adding\nextra call frame.\nIt\u0027s trivially done in interpreter and a bit trickier in JITs.\nIn case of x64 JIT the bigger part of generated assembler prologue\nis common for all programs, so it is simply skipped while jumping.\nOther JITs can do similar prologue-skipping optimization or\ndo stack unwind before jumping into the next program.\n\nbpf_tail_call() arguments:\nctx - context pointer\njmp_table - one of BPF_MAP_TYPE_PROG_ARRAY maps used as the jump table\nindex - index in the jump table\n\nSince all BPF programs are idenitified by file descriptor, user space\nneed to populate the jmp_table with FDs of other BPF programs.\nIf jmp_table[index] is empty the bpf_tail_call() doesn\u0027t jump anywhere\nand program execution continues as normal.\n\nNew BPF_MAP_TYPE_PROG_ARRAY map type is introduced so that user space can\npopulate this jmp_table array with FDs of other bpf programs.\nPrograms can share the same jmp_table array or use multiple jmp_tables.\n\nThe chain of tail calls can form unpredictable dynamic loops therefore\ntail_call_cnt is used to limit the number of calls and currently is set to 32.\n\nUse cases:\nAcked-by: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\n\n\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\n- simplify complex programs by splitting them into a sequence of small programs\n\n- dispatch routine\n  For tracing and future seccomp the program may be triggered on all system\n  calls, but processing of syscall arguments will be different. It\u0027s more\n  efficient to implement them as:\n  int syscall_entry(struct seccomp_data *ctx)\n  {\n     bpf_tail_call(ctx, \u0026syscall_jmp_table, ctx-\u003enr /* syscall number */);\n     ... default: process unknown syscall ...\n  }\n  int sys_write_event(struct seccomp_data *ctx) {...}\n  int sys_read_event(struct seccomp_data *ctx) {...}\n  syscall_jmp_table[__NR_write] \u003d sys_write_event;\n  syscall_jmp_table[__NR_read] \u003d sys_read_event;\n\n  For networking the program may call into different parsers depending on\n  packet format, like:\n  int packet_parser(struct __sk_buff *skb)\n  {\n     ... parse L2, L3 here ...\n     __u8 ipproto \u003d load_byte(skb, ... offsetof(struct iphdr, protocol));\n     bpf_tail_call(skb, \u0026ipproto_jmp_table, ipproto);\n     ... default: process unknown protocol ...\n  }\n  int parse_tcp(struct __sk_buff *skb) {...}\n  int parse_udp(struct __sk_buff *skb) {...}\n  ipproto_jmp_table[IPPROTO_TCP] \u003d parse_tcp;\n  ipproto_jmp_table[IPPROTO_UDP] \u003d parse_udp;\n\n- for TC use case, bpf_tail_call() allows to implement reclassify-like logic\n\n- bpf_map_update_elem/delete calls into BPF_MAP_TYPE_PROG_ARRAY jump table\n  are atomic, so user space can build chains of BPF programs on the fly\n\nImplementation details:\n\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\n- high performance of bpf_tail_call() is the goal.\n  It could have been implemented without JIT changes as a wrapper on top of\n  BPF_PROG_RUN() macro, but with two downsides:\n  . all programs would have to pay performance penalty for this feature and\n    tail call itself would be slower, since mandatory stack unwind, return,\n    stack allocate would be done for every tailcall.\n  . tailcall would be limited to programs running preempt_disabled, since\n    generic \u0027void *ctx\u0027 doesn\u0027t have room for \u0027tail_call_cnt\u0027 and it would\n    need to be either global per_cpu variable accessed by helper and by wrapper\n    or global variable protected by locks.\n\n  In this implementation x64 JIT bypasses stack unwind and jumps into the\n  callee program after prologue.\n\n- bpf_prog_array_compatible() ensures that prog_type of callee and caller\n  are the same and JITed/non-JITed flag is the same, since calling JITed\n  program from non-JITed is invalid, since stack frames are different.\n  Similarly calling kprobe type program from socket type program is invalid.\n\n- jump table is implemented as BPF_MAP_TYPE_PROG_ARRAY to reuse \u0027map\u0027\n  abstraction, its user space API and all of verifier logic.\n  It\u0027s in the existing arraymap.c file, since several functions are\n  shared with regular array map.\n\nSigned-off-by: Alexei Starovoitov \u003cast@plumgrid.com\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n"
    },
    {
      "commit": "9c959c863f8217a2ff3d7c296e8223654d240569",
      "tree": "3e5367b2cb1c54fbe7028f554808b7359f053e19",
      "parents": [
        "d9847d310ab4003725e6ed1822682e24bd406908"
      ],
      "author": {
        "name": "Alexei Starovoitov",
        "email": "ast@plumgrid.com",
        "time": "Wed Mar 25 12:49:22 2015 -0700"
      },
      "committer": {
        "name": "Ingo Molnar",
        "email": "mingo@kernel.org",
        "time": "Thu Apr 02 13:25:50 2015 +0200"
      },
      "message": "tracing: Allow BPF programs to call bpf_trace_printk()\n\nDebugging of BPF programs needs some form of printk from the\nprogram, so let programs call limited trace_printk() with %d %u\n%x %p modifiers only.\n\nSimilar to kernel modules, during program load verifier checks\nwhether program is calling bpf_trace_printk() and if so, kernel\nallocates trace_printk buffers and emits big \u0027this is debug\nonly\u0027 banner.\n\nSigned-off-by: Alexei Starovoitov \u003cast@plumgrid.com\u003e\nReviewed-by: Steven Rostedt \u003crostedt@goodmis.org\u003e\nCc: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nCc: Arnaldo Carvalho de Melo \u003cacme@infradead.org\u003e\nCc: Arnaldo Carvalho de Melo \u003cacme@redhat.com\u003e\nCc: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nCc: David S. Miller \u003cdavem@davemloft.net\u003e\nCc: Jiri Olsa \u003cjolsa@redhat.com\u003e\nCc: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nCc: Masami Hiramatsu \u003cmasami.hiramatsu.pt@hitachi.com\u003e\nCc: Namhyung Kim \u003cnamhyung@kernel.org\u003e\nCc: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nCc: Peter Zijlstra \u003cpeterz@infradead.org\u003e\nLink: http://lkml.kernel.org/r/1427312966-8434-6-git-send-email-ast@plumgrid.com\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\n"
    },
    {
      "commit": "d9847d310ab4003725e6ed1822682e24bd406908",
      "tree": "ab9935a7f11122988f9eba0290f04ffe572b44ac",
      "parents": [
        "2541517c32be2531e0da59dfd7efc1ce844644f5"
      ],
      "author": {
        "name": "Alexei Starovoitov",
        "email": "ast@plumgrid.com",
        "time": "Wed Mar 25 12:49:21 2015 -0700"
      },
      "committer": {
        "name": "Ingo Molnar",
        "email": "mingo@kernel.org",
        "time": "Thu Apr 02 13:25:49 2015 +0200"
      },
      "message": "tracing: Allow BPF programs to call bpf_ktime_get_ns()\n\nbpf_ktime_get_ns() is used by programs to compute time delta\nbetween events or as a timestamp\n\nSigned-off-by: Alexei Starovoitov \u003cast@plumgrid.com\u003e\nReviewed-by: Steven Rostedt \u003crostedt@goodmis.org\u003e\nCc: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nCc: Arnaldo Carvalho de Melo \u003cacme@infradead.org\u003e\nCc: Arnaldo Carvalho de Melo \u003cacme@redhat.com\u003e\nCc: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nCc: David S. Miller \u003cdavem@davemloft.net\u003e\nCc: Jiri Olsa \u003cjolsa@redhat.com\u003e\nCc: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nCc: Masami Hiramatsu \u003cmasami.hiramatsu.pt@hitachi.com\u003e\nCc: Namhyung Kim \u003cnamhyung@kernel.org\u003e\nCc: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nCc: Peter Zijlstra \u003cpeterz@infradead.org\u003e\nLink: http://lkml.kernel.org/r/1427312966-8434-5-git-send-email-ast@plumgrid.com\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\n"
    },
    {
      "commit": "2541517c32be2531e0da59dfd7efc1ce844644f5",
      "tree": "a69f215a0bbc2f5db1a5d7aff83a465940e40e01",
      "parents": [
        "72cbbc8994242b5b43753738c01bf07bf29cb70d"
      ],
      "author": {
        "name": "Alexei Starovoitov",
        "email": "ast@plumgrid.com",
        "time": "Wed Mar 25 12:49:20 2015 -0700"
      },
      "committer": {
        "name": "Ingo Molnar",
        "email": "mingo@kernel.org",
        "time": "Thu Apr 02 13:25:49 2015 +0200"
      },
      "message": "tracing, perf: Implement BPF programs attached to kprobes\n\nBPF programs, attached to kprobes, provide a safe way to execute\nuser-defined BPF byte-code programs without being able to crash or\nhang the kernel in any way. The BPF engine makes sure that such\nprograms have a finite execution time and that they cannot break\nout of their sandbox.\n\nThe user interface is to attach to a kprobe via the perf syscall:\n\n\tstruct perf_event_attr attr \u003d {\n\t\t.type\t\u003d PERF_TYPE_TRACEPOINT,\n\t\t.config\t\u003d event_id,\n\t\t...\n\t};\n\n\tevent_fd \u003d perf_event_open(\u0026attr,...);\n\tioctl(event_fd, PERF_EVENT_IOC_SET_BPF, prog_fd);\n\n\u0027prog_fd\u0027 is a file descriptor associated with BPF program\npreviously loaded.\n\n\u0027event_id\u0027 is an ID of the kprobe created.\n\nClosing \u0027event_fd\u0027:\n\n\tclose(event_fd);\n\n... automatically detaches BPF program from it.\n\nBPF programs can call in-kernel helper functions to:\n\n  - lookup/update/delete elements in maps\n\n  - probe_read - wraper of probe_kernel_read() used to access any\n    kernel data structures\n\nBPF programs receive \u0027struct pt_regs *\u0027 as an input (\u0027struct pt_regs\u0027 is\narchitecture dependent) and return 0 to ignore the event and 1 to store\nkprobe event into the ring buffer.\n\nNote, kprobes are a fundamentally _not_ a stable kernel ABI,\nso BPF programs attached to kprobes must be recompiled for\nevery kernel version and user must supply correct LINUX_VERSION_CODE\nin attr.kern_version during bpf_prog_load() call.\n\nSigned-off-by: Alexei Starovoitov \u003cast@plumgrid.com\u003e\nReviewed-by: Steven Rostedt \u003crostedt@goodmis.org\u003e\nReviewed-by: Masami Hiramatsu \u003cmasami.hiramatsu.pt@hitachi.com\u003e\nCc: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nCc: Arnaldo Carvalho de Melo \u003cacme@infradead.org\u003e\nCc: Arnaldo Carvalho de Melo \u003cacme@redhat.com\u003e\nCc: Daniel Borkmann \u003cdaniel@iogearbox.net\u003e\nCc: David S. Miller \u003cdavem@davemloft.net\u003e\nCc: Jiri Olsa \u003cjolsa@redhat.com\u003e\nCc: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nCc: Namhyung Kim \u003cnamhyung@kernel.org\u003e\nCc: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nCc: Peter Zijlstra \u003cpeterz@infradead.org\u003e\nLink: http://lkml.kernel.org/r/1427312966-8434-4-git-send-email-ast@plumgrid.com\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\n"
    }
  ]
}
