Berkeley Packet Filter (BPF)

Вопросы программного кода и архитектуры Linux

Модератор: Olej


Post by Olej » 02 Jul 2023, 18:46

p. 15:
Olej wrote:
02 Jul 2023, 15:05
If you want to try this code for yourself, it is available at https://github.com/lizrice/learning-ebpf in the Chapter2 directory.

Code: Select all

root@R420:/home/olej/2023/own.BOOKs/eBPF/learning-ebpf/chapter2# pwd
/home/olej/2023/own.BOOKs/eBPF/learning-ebpf/chapter2

Code: Select all

root@R420:/home/olej/2023/own.BOOKs/eBPF/learning-ebpf/chapter2# ./hello.py
In file included from <built-in>:2:
In file included from /virtual/include/bcc/bpf.h:12:
In file included from include/linux/types.h:6:
In file included from include/uapi/linux/types.h:14:
In file included from include/uapi/linux/posix_types.h:5:
In file included from include/linux/stddef.h:5:
In file included from include/uapi/linux/stddef.h:5:
In file included from include/linux/compiler_types.h:80:
include/linux/compiler-clang.h:41:9: warning: '__HAVE_BUILTIN_BSWAP32__' macro redefined [-Wmacro-redefined]
#define __HAVE_BUILTIN_BSWAP32__
        ^
<command line>:4:9: note: previous definition is here
#define __HAVE_BUILTIN_BSWAP32__ 1
        ^
In file included from <built-in>:2:
In file included from /virtual/include/bcc/bpf.h:12:
In file included from include/linux/types.h:6:
In file included from include/uapi/linux/types.h:14:
In file included from include/uapi/linux/posix_types.h:5:
In file included from include/linux/stddef.h:5:
In file included from include/uapi/linux/stddef.h:5:
In file included from include/linux/compiler_types.h:80:
include/linux/compiler-clang.h:42:9: warning: '__HAVE_BUILTIN_BSWAP64__' macro redefined [-Wmacro-redefined]
#define __HAVE_BUILTIN_BSWAP64__
        ^
<command line>:5:9: note: previous definition is here
#define __HAVE_BUILTIN_BSWAP64__ 1
        ^
In file included from <built-in>:2:
In file included from /virtual/include/bcc/bpf.h:12:
In file included from include/linux/types.h:6:
In file included from include/uapi/linux/types.h:14:
In file included from include/uapi/linux/posix_types.h:5:
In file included from include/linux/stddef.h:5:
In file included from include/uapi/linux/stddef.h:5:
In file included from include/linux/compiler_types.h:80:
include/linux/compiler-clang.h:43:9: warning: '__HAVE_BUILTIN_BSWAP16__' macro redefined [-Wmacro-redefined]
#define __HAVE_BUILTIN_BSWAP16__
        ^
<command line>:3:9: note: previous definition is here
#define __HAVE_BUILTIN_BSWAP16__ 1
        ^
3 warnings generated.
b'           <...>-85584   [005] d...1 38918.398565: bpf_trace_printk: Hello World!'
b'           <...>-85588   [025] d...1 38949.374168: bpf_trace_printk: Hello World!'
^C
Traceback (most recent call last):
  File "/home/olej/2023/own.BOOKs/eBPF/learning-ebpf/chapter2/./hello.py", line 15, in <module>
    b.trace_print()
  File "/usr/lib/python3/dist-packages/bcc/__init__.py", line 1332, in trace_print
    line = self.trace_readline(nonblocking=False)
  File "/usr/lib/python3/dist-packages/bcc/__init__.py", line 1312, in trace_readline
    line = trace.readline(1024).rstrip()
KeyboardInterrupt
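
For reference, the hello.py being run here is essentially the following (a minimal BCC sketch along the lines of the book's Chapter 2 example; the exact file is in the lizrice/learning-ebpf repo). BCC compiles the embedded C text with clang at runtime, which is where those three -Wmacro-redefined warnings come from (they appear to be harmless here), and trace_print() then blocks reading the kernel's trace_pipe, which is why Ctrl-C surfaces as a KeyboardInterrupt inside trace_readline().

Code: Select all

#!/usr/bin/python3
# Minimal BCC "Hello World" (a sketch of the book's hello.py).
from bcc import BPF

program = r"""
int hello(void *ctx) {
    bpf_trace_printk("Hello World!");
    return 0;
}
"""

b = BPF(text=program)                        # clang-compile the C text to eBPF
syscall = b.get_syscall_fnname("execve")     # arch-specific syscall symbol name
b.attach_kprobe(event=syscall, fn_name="hello")
b.trace_print()                              # blocks on trace_pipe until Ctrl-C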


Post by Olej » 03 Jul 2023, 01:44

BPF for the little ones, part one: extended BPF
11 Aug 2020 at 14:57
BPF for the little ones, part two: ... BPF programs
22 Nov 2020 at 16:20
Debugging the kernel from the command line with bpftrace
15 Feb 2021 at 21:38
How to not be a programmer, figure out eBPF in a day, and start monitoring DNS
1 Sep 2022 at 10:35
We'll be writing in Python, and we'll start with the simplest thing: understanding how the interaction between Python and eBPF works in the first place.
...
First, let's install these packages:

Code: Select all

sudo apt install python3-bpfcc bpfcc-tools libbpfcc linux-headers-$(uname -r)
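
With those packages in place, the whole Python-to-eBPF round trip fits in a dozen lines: BCC compiles the C text to eBPF at runtime, and the BPF map filled in kernel context is readable from the Python side. A minimal sketch (hypothetical example assuming the bcc bindings from python3-bpfcc; run as root):

Code: Select all

#!/usr/bin/python3
# Sketch: Python reading data back from eBPF through a BPF map.
from time import sleep
from bcc import BPF

b = BPF(text=r"""
BPF_HASH(counts, u32, u64);              // map: PID -> openat() call count

int count_openat(void *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    counts.increment(pid);
    return 0;
}
""")
b.attach_kprobe(event=b.get_syscall_fnname("openat"), fn_name="count_openat")

sleep(5)                                 # let some events accumulate
for pid, count in b["counts"].items():   # the map is visible from Python
    print(f"pid {pid.value}: {count.value} openat() calls")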


Post by Olej » 03 Jul 2023, 18:19

BPF makes heavy use of the capabilities of the Linux kernel profiler perf:
perf: Linux profiling with performance counters
Performance counters are CPU hardware registers that count hardware events such as instructions executed, cache-misses suffered, or branches mispredicted. They form a basis for profiling applications to trace dynamic control flow and identify hotspots. perf provides rich generalized abstractions over hardware specific capabilities. Among others, it provides per task, per CPU and per-workload counters, sampling on top of these and source code event annotation.

Tracepoints are instrumentation points placed at logical locations in code, such as for system calls, TCP/IP events, file system operations, etc. These have negligible overhead when not in use, and can be enabled by the perf command to collect information including timestamps and stack traces. perf can also dynamically create tracepoints using the kprobes and uprobes frameworks, for kernel and userspace dynamic tracing. The possibilities with these are endless.
How to install and configure Perf in Linux distributions
admin, 11 May 2021

Code: Select all

olej@R420:~$ aptitude search linux-tools-common
p   linux-tools-common                                                - Linux kernel version specific tools for version 5.15.0                     

olej@R420:~$ sudo apt install linux-tools-common
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
  linux-tools-common
0 upgraded, 1 newly installed, 0 to remove and 2 not upgraded.
Need to get 276 kB of archives.
After this operation, 850 kB of additional disk space will be used.
Get:1 http://ubuntu.volia.net/ubuntu-archive jammy-updates/main amd64 linux-tools-common all 5.15.0-76.83 [276 kB]
Fetched 276 kB in 0s (937 kB/s)
Selecting previously unselected package linux-tools-common.
(Reading database ... 553878 files and directories currently installed.)
Preparing to unpack .../linux-tools-common_5.15.0-76.83_all.deb ...
Unpacking linux-tools-common (5.15.0-76.83) ...
Setting up linux-tools-common (5.15.0-76.83) ...
Processing triggers for man-db (2.10.2-1) ...

Code: Select all

olej@R420:~$ uname -r
5.15.0-76-generic

olej@R420:~$ aptitude search linux-tools-5.15.0-76
p   linux-tools-5.15.0-76                                             - Linux kernel version specific tools for version 5.15.0-76                  
p   linux-tools-5.15.0-76-generic                                     - Linux kernel version specific tools for version 5.15.0-76                  
p   linux-tools-5.15.0-76-lowlatency                                  - Linux kernel version specific tools for version 5.15.0-76                  

Code: Select all

olej@R420:~$ sudo apt install linux-tools-5.15.0-76 linux-tools-5.15.0-76-generic
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
  linux-tools-5.15.0-76 linux-tools-5.15.0-76-generic
0 upgraded, 2 newly installed, 0 to remove and 2 not upgraded.
Need to get 7,940 kB of archives.
After this operation, 27.3 MB of additional disk space will be used.
Get:1 http://ubuntu.volia.net/ubuntu-archive jammy-updates/main amd64 linux-tools-5.15.0-76 amd64 5.15.0-76.83 [7,938 kB]
Get:2 http://ubuntu.volia.net/ubuntu-archive jammy-updates/main amd64 linux-tools-5.15.0-76-generic amd64 5.15.0-76.83 [1,792 B]
Fetched 7,940 kB in 6s (1,335 kB/s)
Selecting previously unselected package linux-tools-5.15.0-76.
(Reading database ... 553951 files and directories currently installed.)
Preparing to unpack .../linux-tools-5.15.0-76_5.15.0-76.83_amd64.deb ...
Unpacking linux-tools-5.15.0-76 (5.15.0-76.83) ...
Selecting previously unselected package linux-tools-5.15.0-76-generic.
Preparing to unpack .../linux-tools-5.15.0-76-generic_5.15.0-76.83_amd64.deb ...
Unpacking linux-tools-5.15.0-76-generic (5.15.0-76.83) ...
Setting up linux-tools-5.15.0-76 (5.15.0-76.83) ...
Setting up linux-tools-5.15.0-76-generic (5.15.0-76.83) ...

Code: Select all

olej@R420:~$ perf -v
perf version 5.15.99
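
With perf installed, the link between BPF and the perf machinery mentioned above can be seen directly from BCC: a BPF program can be attached to a perf hardware counter and run on its samples. A minimal sketch (hypothetical example assuming python3-bpfcc; needs root):

Code: Select all

#!/usr/bin/python3
# Sketch: run a BPF program on perf hardware-counter samples.
from time import sleep
from bcc import BPF, PerfType, PerfHWConfig

b = BPF(text=r"""
#include <uapi/linux/bpf_perf_event.h>

BPF_HISTOGRAM(samples, u32);

int on_sample(struct bpf_perf_event_data *ctx) {
    samples.increment(bpf_get_smp_processor_id());   // count samples per CPU
    return 0;
}
""")
# sample the cpu-cycles counter 99 times a second on every CPU
b.attach_perf_event(ev_type=PerfType.HARDWARE,
                    ev_config=PerfHWConfig.CPU_CYCLES,
                    fn_name="on_sample", sample_freq=99)
sleep(5)
b["samples"].print_linear_hist("cpu")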


Post by Olej » 04 Jul 2023, 00:58

Olej wrote:
03 Jul 2023, 18:19
How to install and configure Perf in Linux distributions
The list of events that perf can access:

Code: Select all

olej@R420:~$ perf list pmu hw sw cache

List of pre-defined events (to be used in -e):

  branch-instructions OR cpu/branch-instructions/    [Kernel PMU event]
  branch-misses OR cpu/branch-misses/                [Kernel PMU event]
  bus-cycles OR cpu/bus-cycles/                      [Kernel PMU event]
  cache-misses OR cpu/cache-misses/                  [Kernel PMU event]
  cache-references OR cpu/cache-references/          [Kernel PMU event]
  cpu-cycles OR cpu/cpu-cycles/                      [Kernel PMU event]
  instructions OR cpu/instructions/                  [Kernel PMU event]
  mem-loads OR cpu/mem-loads/                        [Kernel PMU event]
  mem-stores OR cpu/mem-stores/                      [Kernel PMU event]
  ref-cycles OR cpu/ref-cycles/                      [Kernel PMU event]
  stalled-cycles-frontend OR cpu/stalled-cycles-frontend/ [Kernel PMU event]
  topdown-fetch-bubbles OR cpu/topdown-fetch-bubbles/ [Kernel PMU event]
  topdown-recovery-bubbles OR cpu/topdown-recovery-bubbles/ [Kernel PMU event]
  topdown-slots-issued OR cpu/topdown-slots-issued/  [Kernel PMU event]
  topdown-slots-retired OR cpu/topdown-slots-retired/ [Kernel PMU event]
  topdown-total-slots OR cpu/topdown-total-slots/    [Kernel PMU event]
  cstate_core/c3-residency/                          [Kernel PMU event]
  cstate_core/c6-residency/                          [Kernel PMU event]
  cstate_core/c7-residency/                          [Kernel PMU event]
  cstate_pkg/c2-residency/                           [Kernel PMU event]
  cstate_pkg/c3-residency/                           [Kernel PMU event]
  cstate_pkg/c6-residency/                           [Kernel PMU event]
  cstate_pkg/c7-residency/                           [Kernel PMU event]
  msr/aperf/                                         [Kernel PMU event]
  msr/cpu_thermal_margin/                            [Kernel PMU event]
  msr/mperf/                                         [Kernel PMU event]
  msr/smi/                                           [Kernel PMU event]
  msr/tsc/                                           [Kernel PMU event]
  power/energy-cores/                                [Kernel PMU event]
  power/energy-pkg/                                  [Kernel PMU event]
  power/energy-ram/                                  [Kernel PMU event]
  uncore_imc_0/cas_count_read/                       [Kernel PMU event]
  uncore_imc_0/cas_count_write/                      [Kernel PMU event]
  uncore_imc_0/clockticks/                           [Kernel PMU event]
  uncore_imc_1/cas_count_read/                       [Kernel PMU event]
  uncore_imc_1/cas_count_write/                      [Kernel PMU event]
  uncore_imc_1/clockticks/                           [Kernel PMU event]
  uncore_imc_2/cas_count_read/                       [Kernel PMU event]
  uncore_imc_2/cas_count_write/                      [Kernel PMU event]
  uncore_imc_2/clockticks/                           [Kernel PMU event]
  uncore_imc_3/cas_count_read/                       [Kernel PMU event]
  uncore_imc_3/cas_count_write/                      [Kernel PMU event]
  uncore_imc_3/clockticks/                           [Kernel PMU event]

cache:
  l1d.replacement                                   
       [L1D data line replacements]
  l1d_pend_miss.fb_full                             
       [Cycles a demand request was blocked due to Fill Buffers inavailability]
  l1d_pend_miss.pending                             
       [L1D miss oustandings duration in cycles]
  l1d_pend_miss.pending_cycles                      
       [Cycles with L1D load Misses outstanding]
  l1d_pend_miss.pending_cycles_any                  
       [Cycles with L1D load Misses outstanding from any thread on physical core]
  l2_l1d_wb_rqsts.all                               
       [Not rejected writebacks from L1D to L2 cache lines in any state]
  l2_l1d_wb_rqsts.hit_e                             
       [Not rejected writebacks from L1D to L2 cache lines in E state]
  l2_l1d_wb_rqsts.hit_m                             
       [Not rejected writebacks from L1D to L2 cache lines in M state]
  l2_l1d_wb_rqsts.miss                              
       [Count the number of modified Lines evicted from L1 and missed L2. (Non-rejected WBs from the DCU.)]
  l2_lines_in.all                                   
       [L2 cache lines filling L2]
  l2_lines_in.e                                     
       [L2 cache lines in E state filling L2]
  l2_lines_in.i                                     
       [L2 cache lines in I state filling L2]
  l2_lines_in.s                                     
       [L2 cache lines in S state filling L2]
  l2_lines_out.demand_clean                         
       [Clean L2 cache lines evicted by demand]
  l2_lines_out.demand_dirty                         
       [Dirty L2 cache lines evicted by demand]
  l2_lines_out.dirty_all                            
       [Dirty L2 cache lines filling the L2]
  l2_lines_out.pf_clean                             
       [Clean L2 cache lines evicted by L2 prefetch]
  l2_lines_out.pf_dirty                             
       [Dirty L2 cache lines evicted by L2 prefetch]
  l2_rqsts.all_code_rd                              
       [L2 code requests]
  l2_rqsts.all_demand_data_rd                       
       [Demand Data Read requests]
  l2_rqsts.all_pf                                   
       [Requests from L2 hardware prefetchers]
  l2_rqsts.all_rfo                                  
       [RFO requests to L2 cache]
  l2_rqsts.code_rd_hit                              
       [L2 cache hits when fetching instructions, code reads]
  l2_rqsts.code_rd_miss                             
       [L2 cache misses when fetching instructions]
  l2_rqsts.demand_data_rd_hit                       
       [Demand Data Read requests that hit L2 cache]
  l2_rqsts.pf_hit                                   
       [Requests from the L2 hardware prefetchers that hit L2 cache]
  l2_rqsts.pf_miss                                  
       [Requests from the L2 hardware prefetchers that miss L2 cache]
  l2_rqsts.rfo_hit                                  
       [RFO requests that hit L2 cache]
  l2_rqsts.rfo_miss                                 
       [RFO requests that miss L2 cache]
  l2_store_lock_rqsts.all                           
       [RFOs that access cache lines in any state]
  l2_store_lock_rqsts.hit_m                         
       [RFOs that hit cache lines in M state]
  l2_store_lock_rqsts.miss                          
       [RFOs that miss cache lines]
  l2_trans.all_pf                                   
       [L2 or LLC HW prefetches that access L2 cache]
  l2_trans.all_requests                             
       [Transactions accessing L2 pipe]
  l2_trans.code_rd                                  
       [L2 cache accesses when fetching instructions]
  l2_trans.demand_data_rd                           
       [Demand Data Read requests that access L2 cache]
  l2_trans.l1d_wb                                   
       [L1D writebacks that access L2 cache]
  l2_trans.l2_fill                                  
       [L2 fill requests that access L2 cache]
  l2_trans.l2_wb                                    
       [L2 writebacks that access L2 cache]
  l2_trans.rfo                                      
       [RFO requests that access L2 cache]
  lock_cycles.cache_lock_duration                   
       [Cycles when L1D is locked]
  longest_lat_cache.miss                            
       [Core-originated cacheable demand requests missed LLC]
  longest_lat_cache.reference                       
       [Core-originated cacheable demand requests that refer to LLC]
  mem_load_uops_llc_hit_retired.xsnp_hit            
       [Retired load uops which data sources were LLC and cross-core snoop hits in on-pkg core cache (Precise event)]
  mem_load_uops_llc_hit_retired.xsnp_hitm           
       [Retired load uops which data sources were HitM responses from shared LLC (Precise event)]
  mem_load_uops_llc_hit_retired.xsnp_miss           
       [Retired load uops which data sources were LLC hit and cross-core snoop missed in on-pkg core cache (Precise event)]
  mem_load_uops_llc_hit_retired.xsnp_none           
       [Retired load uops which data sources were hits in LLC without snoops required (Precise event)]
  mem_load_uops_llc_miss_retired.local_dram         
       [Retired load uops whose data source was local DRAM (Snoop not needed, Snoop Miss, or Snoop Hit data not forwarded)]
  mem_load_uops_llc_miss_retired.remote_dram        
       [Retired load uops whose data source was remote DRAM (Snoop not needed, Snoop Miss, or Snoop Hit data not forwarded)]
  mem_load_uops_llc_miss_retired.remote_fwd         
       [Data forwarded from remote cache]
  mem_load_uops_llc_miss_retired.remote_hitm        
       [Remote cache HITM]
  mem_load_uops_retired.hit_lfb                     
       [Retired load uops which data sources were load uops missed L1 but hit FB due to preceding miss to the same cache line with data not ready
        (Precise event)]
  mem_load_uops_retired.l1_hit                      
       [Retired load uops with L1 cache hits as data sources (Precise event)]
  mem_load_uops_retired.l1_miss                     
       [Retired load uops which data sources following L1 data-cache miss (Precise event)]
  mem_load_uops_retired.l2_hit                      
       [Retired load uops with L2 cache hits as data sources (Precise event)]
  mem_load_uops_retired.l2_miss                     
       [Retired load uops with L2 cache misses as data sources (Precise event)]
  mem_load_uops_retired.llc_hit                     
       [Retired load uops which data sources were data hits in LLC without snoops required (Precise event)]
  mem_load_uops_retired.llc_miss                    
       [Miss in last-level (L3) cache. Excludes Unknown data-source (Precise event)]
  mem_uops_retired.all_loads                        
       [All retired load uops. (Precise Event)]
  mem_uops_retired.all_stores                       
       [All retired store uops. (Precise Event)]
  mem_uops_retired.lock_loads                       
       [Retired load uops with locked access. (Precise Event)]
  mem_uops_retired.split_loads                      
       [Retired load uops that split across a cacheline boundary. (Precise Event)]
  mem_uops_retired.split_stores                     
       [Retired store uops that split across a cacheline boundary. (Precise Event)]
  mem_uops_retired.stlb_miss_loads                  
       [Retired load uops that miss the STLB. (Precise Event)]
  mem_uops_retired.stlb_miss_stores                 
       [Retired store uops that miss the STLB. (Precise Event)]
  offcore_requests.all_data_rd                      
       [Demand and prefetch data reads]
  offcore_requests.demand_code_rd                   
       [Cacheable and noncachaeble code read requests]
  offcore_requests.demand_data_rd                   
       [Demand Data Read requests sent to uncore]
  offcore_requests.demand_rfo                       
       [Demand RFO requests including regular RFOs, locks, ItoM]
  offcore_requests_buffer.sq_full                   
       [Cases when offcore requests buffer cannot take more entries for core]
  offcore_requests_outstanding.all_data_rd          
       [Offcore outstanding cacheable Core Data Read transactions in SuperQueue (SQ), queue to uncore]
  offcore_requests_outstanding.cycles_with_data_rd  
       [Cycles when offcore outstanding cacheable Core Data Read transactions are present in SuperQueue (SQ), queue to uncore]
  offcore_requests_outstanding.cycles_with_demand_code_rd
       [Offcore outstanding code reads transactions in SuperQueue (SQ), queue to uncore, every cycle]
  offcore_requests_outstanding.cycles_with_demand_data_rd
       [Cycles when offcore outstanding Demand Data Read transactions are present in SuperQueue (SQ), queue to uncore]
  offcore_requests_outstanding.cycles_with_demand_rfo
       [Offcore outstanding demand rfo reads transactions in SuperQueue (SQ), queue to uncore, every cycle]
  offcore_requests_outstanding.demand_code_rd       
       [Offcore outstanding code reads transactions in SuperQueue (SQ), queue to uncore, every cycle]
  offcore_requests_outstanding.demand_data_rd       
       [Offcore outstanding Demand Data Read transactions in uncore queue]
  offcore_requests_outstanding.demand_data_rd_ge_6  
       [Cycles with at least 6 offcore outstanding Demand Data Read transactions in uncore queue]
  offcore_requests_outstanding.demand_rfo           
       [Offcore outstanding RFO store transactions in SuperQueue (SQ), queue to uncore]
  offcore_response.all_data_rd.llc_hit.hit_other_core_no_fwd
       [Counts demand & prefetch data reads that hit in the LLC and the snoops to sibling cores hit in either E/S state and the line is not
        forwarded]
  offcore_response.all_data_rd.llc_hit.hitm_other_core
       [Counts demand & prefetch data reads that hit in the LLC and the snoop to one of the sibling cores hits the line in M state and the line
        is forwarded]
  offcore_response.all_data_rd.llc_hit.no_snoop_needed
       [Counts demand & prefetch data reads that hit in the LLC and sibling core snoops are not needed as either the core-valid bit is not set or
        the shared line is present in multiple cores]
  offcore_response.all_data_rd.llc_hit.snoop_miss   
       [Counts demand & prefetch data reads that hit in the LLC and sibling core snoop returned a clean response]
  offcore_response.all_pf_data_rd.llc_hit.any_response
       [Counts all prefetch data reads that hit the LLC]
  offcore_response.all_pf_data_rd.llc_hit.hit_other_core_no_fwd
       [Counts prefetch data reads that hit in the LLC and the snoops to sibling cores hit in either E/S state and the line is not forwarded]
  offcore_response.all_pf_data_rd.llc_hit.hitm_other_core
       [Counts prefetch data reads that hit in the LLC and the snoop to one of the sibling cores hits the line in M state and the line is
        forwarded]
  offcore_response.all_pf_data_rd.llc_hit.no_snoop_needed
       [Counts prefetch data reads that hit in the LLC and sibling core snoops are not needed as either the core-valid bit is not set or the
        shared line is present in multiple cores]
  offcore_response.all_pf_data_rd.llc_hit.snoop_miss
       [Counts prefetch data reads that hit in the LLC and sibling core snoop returned a clean response]
  offcore_response.all_reads.llc_hit.any_response   
       [Counts all data/code/rfo reads (demand & prefetch) that hit in the LLC]
  offcore_response.all_reads.llc_hit.hit_other_core_no_fwd
       [Counts all data/code/rfo reads (demand & prefetch) that hit in the LLC and the snoops to sibling cores hit in either E/S state and the
        line is not forwarded]
  offcore_response.all_reads.llc_hit.hitm_other_core
       [Counts all data/code/rfo reads (demand & prefetch) that hit in the LLC and the snoop to one of the sibling cores hits the line in M state
        and the line is forwarded]
  offcore_response.all_reads.llc_hit.no_snoop_needed
       [Counts all data/code/rfo reads (demand & prefetch) that hit in the LLC and sibling core snoops are not needed as either the core-valid
        bit is not set or the shared line is present in multiple cores]
  offcore_response.all_reads.llc_hit.snoop_miss     
       [Counts all data/code/rfo reads (demand & prefetch) that hit in the LLC and sibling core snoop returned a clean response]
  offcore_response.corewb.any_response              
       [Counts all writebacks from the core to the LLC]
  offcore_response.demand_code_rd.llc_hit.any_response
       [Counts all demand code reads that hit in the LLC]
  offcore_response.demand_data_rd.llc_hit.any_response
       [Counts all demand data reads that hit in the LLC]
  offcore_response.demand_data_rd.llc_hit.hit_other_core_no_fwd
       [Counts demand data reads that hit in the LLC and the snoops to sibling cores hit in either E/S state and the line is not forwarded]
  offcore_response.demand_data_rd.llc_hit.hitm_other_core
       [Counts demand data reads that hit in the LLC and the snoop to one of the sibling cores hits the line in M state and the line is forwarded]
  offcore_response.demand_data_rd.llc_hit.no_snoop_needed
       [Counts demand data reads that hit in the LLC and sibling core snoops are not needed as either the core-valid bit is not set or the shared
        line is present in multiple cores]
  offcore_response.demand_data_rd.llc_hit.snoop_miss
       [Counts demand data reads that hit in the LLC and sibling core snoop returned a clean response]
  offcore_response.demand_rfo.llc_hit.hitm_other_core
       [Counts demand data writes (RFOs) that hit in the LLC and the snoop to one of the sibling cores hits the line in M state and the line is
        forwarded]
  offcore_response.other.lru_hints                  
       [Counts L2 hints sent to LLC to keep a line from being evicted out of the core caches]
  offcore_response.other.portio_mmio_uc             
       [Counts miscellaneous accesses that include port i/o, MMIO and uncacheable memory accesses]
  offcore_response.pf_l2_code_rd.llc_hit.any_response
       [Counts all prefetch (that bring data to L2) code reads that hit in the LLC]
  offcore_response.pf_l2_data_rd.llc_hit.any_response
       [Counts prefetch (that bring data to L2) data reads that hit in the LLC]
  offcore_response.pf_l2_data_rd.llc_hit.hit_other_core_no_fwd
       [Counts prefetch (that bring data to L2) data reads that hit in the LLC and the snoops to sibling cores hit in either E/S state and the
        line is not forwarded]
  offcore_response.pf_l2_data_rd.llc_hit.hitm_other_core
       [Counts prefetch (that bring data to L2) data reads that hit in the LLC and the snoop to one of the sibling cores hits the line in M state
        and the line is forwarded]
  offcore_response.pf_l2_data_rd.llc_hit.no_snoop_needed
       [Counts prefetch (that bring data to L2) data reads that hit in the LLC and sibling core snoops are not needed as either the core-valid
        bit is not set or the shared line is present in multiple cores]
  offcore_response.pf_l2_data_rd.llc_hit.snoop_miss 
       [Counts prefetch (that bring data to L2) data reads that hit in the LLC and the snoops sent to sibling cores return clean response]
  offcore_response.pf_llc_code_rd.llc_hit.any_response
       [Counts all prefetch (that bring data to LLC only) code reads that hit in the LLC]
  offcore_response.pf_llc_data_rd.llc_hit.any_response
       [Counts prefetch (that bring data to LLC only) data reads that hit in the LLC]
  offcore_response.pf_llc_data_rd.llc_hit.hit_other_core_no_fwd
       [Counts prefetch (that bring data to LLC only) data reads that hit in the LLC and the snoops to sibling cores hit in either E/S state and
        the line is not forwarded]
  offcore_response.pf_llc_data_rd.llc_hit.hitm_other_core
       [Counts prefetch (that bring data to LLC only) data reads that hit in the LLC and the snoop to one of the sibling cores hits the line in M
        state and the line is forwarded]
  offcore_response.pf_llc_data_rd.llc_hit.no_snoop_needed
       [Counts prefetch (that bring data to LLC only) data reads that hit in the LLC and sibling core snoops are not needed as either the
        core-valid bit is not set or the shared line is present in multiple cores]
  offcore_response.pf_llc_data_rd.llc_hit.snoop_miss
       [Counts prefetch (that bring data to LLC only) data reads that hit in the LLC and the snoops sent to sibling cores return clean response]
  offcore_response.split_lock_uc_lock.any_response  
       [Counts requests where the address of an atomic lock instruction spans a cache line boundary or the lock instruction is executed on
        uncacheable address]
  offcore_response.streaming_stores.any_response    
       [Counts non-temporal stores]
  sq_misc.split_lock                                
       [Split locks in SQ]

floating point:
  fp_assist.any                                     
       [Cycles with any input/output SSE or FP assist]
  fp_assist.simd_input                              
       [Number of SIMD FP assists due to input values]
  fp_assist.simd_output                             
       [Number of SIMD FP assists due to Output values]
  fp_assist.x87_input                               
       [Number of X87 assists due to input value]
  fp_assist.x87_output                              
       [Number of X87 assists due to output value]
  fp_comp_ops_exe.sse_packed_double                 
       [Number of SSE* or AVX-128 FP Computational packed double-precision uops issued this cycle]
  fp_comp_ops_exe.sse_packed_single                 
       [Number of SSE* or AVX-128 FP Computational packed single-precision uops issued this cycle]
  fp_comp_ops_exe.sse_scalar_double                 
       [Number of SSE* or AVX-128 FP Computational scalar double-precision uops issued this cycle]
  fp_comp_ops_exe.sse_scalar_single                 
       [Number of SSE* or AVX-128 FP Computational scalar single-precision uops issued this cycle]
  fp_comp_ops_exe.x87                               
       [Number of FP Computational Uops Executed this cycle. The number of FADD, FSUB, FCOM, FMULs, integer MULsand IMULs, FDIVs, FPREMs, FSQRTS,
        integer DIVs, and IDIVs. This event does not distinguish an FADD used in the middle of a transcendental flow from a s]
  other_assists.avx_store                           
       [Number of GSSE memory assist for stores. GSSE microcode assist is being invoked whenever the hardware is unable to properly handle
        GSSE-256b operations]
  other_assists.avx_to_sse                          
       [Number of transitions from AVX-256 to legacy SSE when penalty applicable]
  other_assists.sse_to_avx                          
       [Number of transitions from SSE to AVX-256 when penalty applicable]
  simd_fp_256.packed_double                         
       [number of AVX-256 Computational FP double precision uops issued this cycle]
  simd_fp_256.packed_single                         
       [number of GSSE-256 Computational FP single precision uops issued this cycle]

frontend:
  dsb2mite_switches.count                           
       [Decode Stream Buffer (DSB)-to-MITE switches]
  dsb2mite_switches.penalty_cycles                  
       [Decode Stream Buffer (DSB)-to-MITE switch true penalty cycles]
  dsb_fill.exceed_dsb_lines                         
       [Cycles when Decode Stream Buffer (DSB) fill encounter more than 3 Decode Stream Buffer (DSB) lines]
  icache.hit                                        
       [Number of Instruction Cache, Streaming Buffer and Victim Cache Reads. both cacheable and noncacheable, including UC fetches]
  icache.ifetch_stall                               
       [Cycles where a code-fetch stalled due to L1 instruction-cache miss or an iTLB miss]
  icache.misses                                     
       [Instruction cache, streaming buffer and victim cache misses]
  idq.all_dsb_cycles_4_uops                         
       [Cycles Decode Stream Buffer (DSB) is delivering 4 Uops]
  idq.all_dsb_cycles_any_uops                       
       [Cycles Decode Stream Buffer (DSB) is delivering any Uop]
  idq.all_mite_cycles_4_uops                        
       [Cycles MITE is delivering 4 Uops]
  idq.all_mite_cycles_any_uops                      
       [Cycles MITE is delivering any Uop]
  idq.dsb_cycles                                    
       [Cycles when uops are being delivered to Instruction Decode Queue (IDQ) from Decode Stream Buffer (DSB) path]
  idq.dsb_uops                                      
       [Uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path]
  idq.empty                                         
       [Instruction Decode Queue (IDQ) empty cycles]
  idq.mite_all_uops                                 
       [Uops delivered to Instruction Decode Queue (IDQ) from MITE path]
  idq.mite_cycles                                   
       [Cycles when uops are being delivered to Instruction Decode Queue (IDQ) from MITE path]
  idq.mite_uops                                     
       [Uops delivered to Instruction Decode Queue (IDQ) from MITE path]
  idq.ms_cycles                                     
       [Cycles when uops are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequenser (MS) is busy]
  idq.ms_dsb_cycles                                 
       [Cycles when uops initiated by Decode Stream Buffer (DSB) are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequenser
        (MS) is busy]
  idq.ms_dsb_occur                                  
       [Deliveries to Instruction Decode Queue (IDQ) initiated by Decode Stream Buffer (DSB) while Microcode Sequenser (MS) is busy]
  idq.ms_dsb_uops                                   
       [Uops initiated by Decode Stream Buffer (DSB) that are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequenser (MS) is
        busy]
  idq.ms_mite_uops                                  
       [Uops initiated by MITE and delivered to Instruction Decode Queue (IDQ) while Microcode Sequenser (MS) is busy]
  idq.ms_switches                                   
       [Number of switches from DSB (Decode Stream Buffer) or MITE (legacy decode pipeline) to the Microcode Sequencer]
  idq.ms_uops                                       
       [Uops delivered to Instruction Decode Queue (IDQ) while Microcode Sequenser (MS) is busy]
  idq_uops_not_delivered.core                       
       [Uops not delivered to Resource Allocation Table (RAT) per thread when backend of the machine is not stalled]
  idq_uops_not_delivered.cycles_0_uops_deliv.core   
       [Cycles per thread when 4 or more uops are not delivered to Resource Allocation Table (RAT) when backend of the machine is not stalled]
  idq_uops_not_delivered.cycles_fe_was_ok           
       [Counts cycles FE delivered 4 uops or Resource Allocation Table (RAT) was stalling FE]
  idq_uops_not_delivered.cycles_le_1_uop_deliv.core 
       [Cycles per thread when 3 or more uops are not delivered to Resource Allocation Table (RAT) when backend of the machine is not stalled]
  idq_uops_not_delivered.cycles_le_2_uop_deliv.core 
       [Cycles with less than 2 uops delivered by the front end]
  idq_uops_not_delivered.cycles_le_3_uop_deliv.core 
       [Cycles with less than 3 uops delivered by the front end]

memory:
  machine_clears.memory_ordering                    
       [Counts the number of machine clears due to memory order conflicts]
  mem_trans_retired.load_latency_gt_128             
       [Loads with latency value being above 128 (Must be precise)]
  mem_trans_retired.load_latency_gt_16              
       [Loads with latency value being above 16 (Must be precise)]
  mem_trans_retired.load_latency_gt_256             
       [Loads with latency value being above 256 (Must be precise)]
  mem_trans_retired.load_latency_gt_32              
       [Loads with latency value being above 32 (Must be precise)]
  mem_trans_retired.load_latency_gt_4               
       [Loads with latency value being above 4 (Must be precise)]
  mem_trans_retired.load_latency_gt_512             
       [Loads with latency value being above 512 (Must be precise)]
  mem_trans_retired.load_latency_gt_64              
       [Loads with latency value being above 64 (Must be precise)]
  mem_trans_retired.load_latency_gt_8               
       [Loads with latency value being above 8 (Must be precise)]
  mem_trans_retired.precise_store                   
       [Sample stores and collect precise store operation via PEBS record. PMC3 only (Must be precise)]
  misalign_mem_ref.loads                            
       [Speculative cache line split load uops dispatched to L1 cache]
  misalign_mem_ref.stores                           
       [Speculative cache line split STA uops dispatched to L1 cache]
  offcore_response.all_code_rd.llc_miss.any_response
       [Counts all demand & prefetch code reads that miss the LLC]
  offcore_response.all_code_rd.llc_miss.remote_dram 
       [Counts all demand & prefetch code reads that miss the LLC and the data returned from remote dram]
  offcore_response.all_code_rd.llc_miss.remote_hit_forward
       [Counts all demand & prefetch code reads that miss the LLC and the data forwarded from remote cache]
  offcore_response.all_data_rd.llc_miss.any_response
       [Counts all demand & prefetch data reads that hits the LLC]
  offcore_response.all_reads.llc_miss.any_response  
       [Counts all data/code/rfo reads (demand & prefetch) that hit the LLC]
  offcore_response.all_reads.llc_miss.local_dram    
       [Counts all data/code/rfo reads (demand & prefetch) that miss the LLC and the data returned from local dram]
  offcore_response.all_reads.llc_miss.remote_hit_forward
       [Counts all data/code/rfo reads (demand & prefetch) that miss the LLC and the data forwarded from remote cache]
  offcore_response.all_reads.llc_miss.remote_hitm   
       [Counts all data/code/rfo reads (demand & prefetch) that miss the LLC the data is found in M state in remote cache and forwarded from
        there]
  offcore_response.demand_code_rd.llc_miss.any_response
       [Counts all demand code reads that miss the LLC]
  offcore_response.demand_code_rd.llc_miss.local_dram
       [Counts all demand code reads that miss the LLC and the data returned from local dram]
  offcore_response.demand_code_rd.llc_miss.remote_dram
       [Counts all demand code reads that miss the LLC and the data returned from remote dram]
  offcore_response.demand_code_rd.llc_miss.remote_hit_forward
       [Counts all demand code reads that miss the LLC and the data forwarded from remote cache]
  offcore_response.demand_code_rd.llc_miss.remote_hitm
       [Counts all demand code reads that miss the LLC the data is found in M state in remote cache and forwarded from there]
  offcore_response.demand_data_rd.llc_miss.any_dram 
       [Counts demand data reads that miss the LLC and the data returned from remote & local dram]
  offcore_response.demand_data_rd.llc_miss.any_response
       [Counts demand data reads that miss in the LLC]
  offcore_response.demand_data_rd.llc_miss.local_dram
       [Counts demand data reads that miss the LLC and the data returned from local dram]
  offcore_response.demand_data_rd.llc_miss.remote_dram
       [Counts demand data reads that miss the LLC and the data returned from remote dram]
  offcore_response.demand_data_rd.llc_miss.remote_hit_forward
       [Counts demand data reads that miss the LLC and the data forwarded from remote cache]
  offcore_response.demand_data_rd.llc_miss.remote_hitm
       [Counts demand data reads that miss the LLC the data is found in M state in remote cache and forwarded from there]
  offcore_response.demand_rfo.llc_miss.remote_hitm  
       [Counts all demand data writes (RFOs) that miss the LLC and the data is found in M state in remote cache and forwarded from there]
  offcore_response.pf_l2_code_rd.llc_miss.any_response
       [Counts all prefetch (that bring data to L2) code reads that miss the LLC and the data returned from remote & local dram]
  offcore_response.pf_l2_data_rd.llc_miss.any_dram  
       [Counts prefetch (that bring data to L2) data reads that miss the LLC and the data returned from remote & local dram]
  offcore_response.pf_l2_data_rd.llc_miss.any_response
       [Counts prefetch (that bring data to L2) data reads that miss in the LLC]
  offcore_response.pf_l2_data_rd.llc_miss.local_dram
       [Counts prefetch (that bring data to L2) data reads that miss the LLC and the data returned from local dram]
  offcore_response.pf_l2_data_rd.llc_miss.remote_dram
       [Counts prefetch (that bring data to L2) data reads that miss the LLC and the data returned from remote dram]
  offcore_response.pf_l2_data_rd.llc_miss.remote_hit_forward
       [Counts prefetch (that bring data to L2) data reads that miss the LLC and the data forwarded from remote cache]
  offcore_response.pf_l2_data_rd.llc_miss.remote_hitm
       [Counts prefetch (that bring data to L2) data reads that miss the LLC the data is found in M state in remote cache and forwarded from
        there]
  offcore_response.pf_llc_code_rd.llc_miss.any_response
       [Counts all prefetch (that bring data to LLC only) code reads that miss in the LLC]
  offcore_response.pf_llc_data_rd.llc_miss.any_response
       [Counts prefetch (that bring data to LLC only) data reads that miss in the LLC]

other:
  cpl_cycles.ring0                                  
       [Unhalted core cycles when the thread is in ring 0]
  cpl_cycles.ring0_trans                            
       [Number of intervals between processor halts while thread is in ring 0]
  cpl_cycles.ring123                                
       [Unhalted core cycles when thread is in rings 1, 2, or 3]
  lock_cycles.split_lock_uc_lock_duration           
       [Cycles when L1 and L2 are locked due to UC or split lock]

pipeline:
  arith.fpu_div                                     
       [Divide operations executed]
  arith.fpu_div_active                              
       [Cycles when divider is busy executing divide operations]
  baclears.any                                      
       [Counts the total number when the front end is resteered, mainly when the BPU cannot provide a correct prediction and this is corrected by
        other branch handling mechanisms at the front end]
  br_inst_exec.all_branches                         
       [Speculative and retired branches]
  br_inst_exec.all_conditional                      
       [Speculative and retired macro-conditional branches]
  br_inst_exec.all_direct_jmp                       
       [Speculative and retired macro-unconditional branches excluding calls and indirects]
  br_inst_exec.all_direct_near_call                 
       [Speculative and retired direct near calls]
  br_inst_exec.all_indirect_jump_non_call_ret       
       [Speculative and retired indirect branches excluding calls and returns]
  br_inst_exec.all_indirect_near_return             
       [Speculative and retired indirect return branches]
  br_inst_exec.nontaken_conditional                 
       [Not taken macro-conditional branches]
  br_inst_exec.taken_conditional                    
       [Taken speculative and retired macro-conditional branches]
  br_inst_exec.taken_direct_jump                    
       [Taken speculative and retired macro-conditional branch instructions excluding calls and indirects]
  br_inst_exec.taken_direct_near_call               
       [Taken speculative and retired direct near calls]
  br_inst_exec.taken_indirect_jump_non_call_ret     
       [Taken speculative and retired indirect branches excluding calls and returns]
  br_inst_exec.taken_indirect_near_call             
       [Taken speculative and retired indirect calls]
  br_inst_exec.taken_indirect_near_return           
       [Taken speculative and retired indirect branches with return mnemonic]
  br_inst_retired.all_branches                      
       [All (macro) branch instructions retired]
  br_inst_retired.all_branches_pebs                 
       [All (macro) branch instructions retired (Must be precise)]
  br_inst_retired.conditional                       
       [Conditional branch instructions retired (Precise event)]
  br_inst_retired.far_branch                        
       [Far branch instructions retired]
  br_inst_retired.near_call                         
       [Direct and indirect near call instructions retired (Precise event)]
  br_inst_retired.near_call_r3                      
       [Direct and indirect macro near call instructions retired (captured in ring 3) (Precise event)]
  br_inst_retired.near_return                       
       [Return instructions retired (Precise event)]
  br_inst_retired.near_taken                        
       [Taken branch instructions retired (Precise event)]
  br_inst_retired.not_taken                         
       [Not taken branch instructions retired]
  br_misp_exec.all_branches                         
       [Speculative and retired mispredicted macro conditional branches]
  br_misp_exec.all_conditional                      
       [Speculative and retired mispredicted macro conditional branches]
  br_misp_exec.all_indirect_jump_non_call_ret       
       [Mispredicted indirect branches excluding calls and returns]
  br_misp_exec.nontaken_conditional                 
       [Not taken speculative and retired mispredicted macro conditional branches]
  br_misp_exec.taken_conditional                    
       [Taken speculative and retired mispredicted macro conditional branches]
  br_misp_exec.taken_indirect_jump_non_call_ret     
       [Taken speculative and retired mispredicted indirect branches excluding calls and returns]
  br_misp_exec.taken_indirect_near_call             
       [Taken speculative and retired mispredicted indirect calls]
  br_misp_exec.taken_return_near                    
       [Taken speculative and retired mispredicted indirect branches with return mnemonic]
  br_misp_retired.all_branches                      
       [All mispredicted macro branch instructions retired]
  br_misp_retired.all_branches_pebs                 
       [Mispredicted macro branch instructions retired (Must be precise)]
  br_misp_retired.conditional                       
       [Mispredicted conditional branch instructions retired (Precise event)]
  br_misp_retired.near_taken                        
       [number of near branch instructions retired that were mispredicted and taken (Precise event)]
  cpu_clk_thread_unhalted.one_thread_active         
       [Count XClk pulses when this thread is unhalted and the other is halted]
  cpu_clk_thread_unhalted.ref_xclk                  
       [Reference cycles when the thread is unhalted (counts at 100 MHz rate)]
  cpu_clk_thread_unhalted.ref_xclk_any              
       [Reference cycles when the at least one thread on the physical core is unhalted. (counts at 100 MHz rate)]
  cpu_clk_unhalted.one_thread_active                
       [Count XClk pulses when this thread is unhalted and the other thread is halted]
  cpu_clk_unhalted.ref_tsc                          
       [Reference cycles when the core is not in halt state]
  cpu_clk_unhalted.ref_xclk                         
       [Reference cycles when the thread is unhalted (counts at 100 MHz rate)]
  cpu_clk_unhalted.ref_xclk_any                     
       [Reference cycles when the at least one thread on the physical core is unhalted. (counts at 100 MHz rate)]
  cpu_clk_unhalted.thread                           
       [Core cycles when the thread is not in halt state]
  cpu_clk_unhalted.thread_any                       
       [Core cycles when at least one thread on the physical core is not in halt state]
  cpu_clk_unhalted.thread_p                         
       [Thread cycles when thread is not in halt state]
  cpu_clk_unhalted.thread_p_any                     
       [Core cycles when at least one thread on the physical core is not in halt state]
  cycle_activity.cycles_l1d_miss                    
       [Cycles while L1 cache miss demand load is outstanding]
  cycle_activity.cycles_l1d_pending                 
       [Cycles with pending L1 cache miss loads]
  cycle_activity.cycles_l2_miss                     
       [Cycles while L2 cache miss load* is outstanding]
  cycle_activity.cycles_l2_pending                  
       [Cycles with pending L2 cache miss loads]
  cycle_activity.cycles_ldm_pending                 
       [Cycles with pending memory loads]
  cycle_activity.cycles_mem_any                     
       [Cycles while memory subsystem has an outstanding load]
  cycle_activity.cycles_no_execute                  
       [This event increments by 1 for every cycle where there was no execute for this thread]
  cycle_activity.stalls_l1d_miss                    
       [Execution stalls while L1 cache miss demand load is outstanding]
  cycle_activity.stalls_l1d_pending                 
       [Execution stalls due to L1 data cache misses]
  cycle_activity.stalls_l2_miss                     
       [Execution stalls while L2 cache miss load* is outstanding]
  cycle_activity.stalls_l2_pending                  
       [Execution stalls due to L2 cache misses]
  cycle_activity.stalls_ldm_pending                 
       [Execution stalls due to memory subsystem]
  cycle_activity.stalls_mem_any                     
       [Execution stalls while memory subsystem has an outstanding load]
  cycle_activity.stalls_total                       
       [Total execution stalls]
  ild_stall.iq_full                                 
       [Stall cycles because IQ is full]
  ild_stall.lcp                                     
       [Stalls caused by changing prefix length of the instruction]
  inst_retired.any                                  
       [Instructions retired from execution]
  inst_retired.any_p                                
       [Number of instructions retired. General Counter - architectural event]
  inst_retired.prec_dist                            
       [Precise instruction retired event with HW to reduce effect of PEBS shadow in IP distribution (Must be precise)]
  int_misc.recovery_cycles                          
       [Number of cycles waiting for the checkpoints in Resource Allocation Table (RAT) to be recovered after Nuke due to all other cases except
        JEClear (e.g. whenever a ucode assist is needed like SSE exception, memory disambiguation, etc.)]
  int_misc.recovery_cycles_any                      
       [Core cycles the allocator was stalled due to recovery from earlier clear event for any thread running on the physical core (e.g.
        misprediction or memory nuke)]
  int_misc.recovery_stalls_count                    
       [Number of occurences waiting for the checkpoints in Resource Allocation Table (RAT) to be recovered after Nuke due to all other cases
        except JEClear (e.g. whenever a ucode assist is needed like SSE exception, memory disambiguation, etc.)]
  ld_blocks.no_sr                                   
       [This event counts the number of times that split load operations are temporarily blocked because all resources for handling the split
        accesses are in use]
  ld_blocks.store_forward                           
       [Cases when loads get true Block-on-Store blocking code preventing store forwarding]
  ld_blocks_partial.address_alias                   
       [False dependencies in MOB due to partial compare on address]
  load_hit_pre.hw_pf                                
       [Not software-prefetch load dispatches that hit FB allocated for hardware prefetch]
  load_hit_pre.sw_pf                                
       [Not software-prefetch load dispatches that hit FB allocated for software prefetch]
  lsd.cycles_4_uops                                 
       [Cycles 4 Uops delivered by the LSD, but didn't come from the decoder]
  lsd.cycles_active                                 
       [Cycles Uops delivered by the LSD, but didn't come from the decoder]
  lsd.uops                                          
       [Number of Uops delivered by the LSD]
  machine_clears.count                              
       [Number of machine clears (nukes) of any type]
  machine_clears.maskmov                            
       [This event counts the number of executed Intel AVX masked load operations that refer to an illegal address range with the mask bits set
        to 0]
  machine_clears.smc                                
       [Self-modifying code (SMC) detected]
  move_elimination.int_eliminated                   
       [Number of integer Move Elimination candidate uops that were eliminated]
  move_elimination.int_not_eliminated               
       [Number of integer Move Elimination candidate uops that were not eliminated]
  move_elimination.simd_eliminated                  
       [Number of SIMD Move Elimination candidate uops that were eliminated]
  move_elimination.simd_not_eliminated              
       [Number of SIMD Move Elimination candidate uops that were not eliminated]
  other_assists.any_wb_assist                       
       [Number of times any microcode assist is invoked by HW upon uop writeback]
  resource_stalls.any                               
       [Resource-related stall cycles]
  resource_stalls.rob                               
       [Cycles stalled due to re-order buffer full]
  resource_stalls.rs                                
       [Cycles stalled due to no eligible RS entry available]
  resource_stalls.sb                                
       [Cycles stalled due to no store buffers available. (not including draining form sync)]
  rob_misc_events.lbr_inserts                       
       [Count cases of saving new LBR]
  rs_events.empty_cycles                            
       [Cycles when Reservation Station (RS) is empty for the thread]
  rs_events.empty_end                               
       [Counts end of periods where the Reservation Station (RS) was empty. Could be useful to precisely locate Frontend Latency Bound issues]
  uops_dispatched_port.port_0                       
       [Cycles per thread when uops are dispatched to port 0]
  uops_dispatched_port.port_0_core                  
       [Cycles per core when uops are dispatched to port 0]
  uops_dispatched_port.port_1                       
       [Cycles per thread when uops are dispatched to port 1]
  uops_dispatched_port.port_1_core                  
       [Cycles per core when uops are dispatched to port 1]
  uops_dispatched_port.port_2                       
       [Cycles per thread when load or STA uops are dispatched to port 2]
  uops_dispatched_port.port_2_core                  
       [Uops dispatched to port 2, loads and stores per core (speculative and retired)]
  uops_dispatched_port.port_3                       
       [Cycles per thread when load or STA uops are dispatched to port 3]
  uops_dispatched_port.port_3_core                  
       [Cycles per core when load or STA uops are dispatched to port 3]
  uops_dispatched_port.port_4                       
       [Cycles per thread when uops are dispatched to port 4]
  uops_dispatched_port.port_4_core                  
       [Cycles per core when uops are dispatched to port 4]
  uops_dispatched_port.port_5                       
       [Cycles per thread when uops are dispatched to port 5]
  uops_dispatched_port.port_5_core                  
       [Cycles per core when uops are dispatched to port 5]
  uops_executed.core                                
       [Number of uops executed on the core]
  uops_executed.core_cycles_ge_1                    
       [Cycles at least 1 micro-op is executed from any thread on physical core]
  uops_executed.core_cycles_ge_2                    
       [Cycles at least 2 micro-op is executed from any thread on physical core]
  uops_executed.core_cycles_ge_3                    
       [Cycles at least 3 micro-op is executed from any thread on physical core]
  uops_executed.core_cycles_ge_4                    
       [Cycles at least 4 micro-op is executed from any thread on physical core]
  uops_executed.core_cycles_none                    
       [Cycles with no micro-ops executed from any thread on physical core]
  uops_executed.cycles_ge_1_uop_exec                
       [Cycles where at least 1 uop was executed per-thread]
  uops_executed.cycles_ge_2_uops_exec               
       [Cycles where at least 2 uops were executed per-thread]
  uops_executed.cycles_ge_3_uops_exec               
       [Cycles where at least 3 uops were executed per-thread]
  uops_executed.cycles_ge_4_uops_exec               
       [Cycles where at least 4 uops were executed per-thread]
  uops_executed.stall_cycles                        
       [Counts number of cycles no uops were dispatched to be executed on this thread]
  uops_executed.thread                              
       [Counts the number of uops to be executed per-thread each cycle]
  uops_issued.any                                   
       [Uops that Resource Allocation Table (RAT) issues to Reservation Station (RS)]
  uops_issued.core_stall_cycles                     
       [Cycles when Resource Allocation Table (RAT) does not issue Uops to Reservation Station (RS) for all threads]
  uops_issued.flags_merge                           
       [Number of flags-merge uops being allocated]
  uops_issued.single_mul                            
       [Number of Multiply packed/scalar single precision uops allocated]
  uops_issued.slow_lea                              
       [Number of slow LEA uops being allocated. A uop is generally considered SlowLea if it has 3 sources (e.g. 2 sources + immediate)
        regardless if as a result of LEA instruction or not]
  uops_issued.stall_cycles                          
       [Cycles when Resource Allocation Table (RAT) does not issue Uops to Reservation Station (RS) for the thread]
  uops_retired.all                                  
       [Retired uops (Precise event)]
  uops_retired.core_stall_cycles                    
       [Cycles without actually retired uops]
  uops_retired.retire_slots                         
       [Retirement slots used (Precise event)]
  uops_retired.stall_cycles                         
       [Cycles without actually retired uops]
  uops_retired.total_cycles                         
       [Cycles with less than 10 actually retired uops]

uncore cache:
  llc_misses.code_llc_prefetch                      
       [LLC prefetch misses for code reads. Derived from unc_c_tor_inserts.miss_opcode.code. Unit: uncore_cbox]
  llc_misses.data_llc_prefetch                      
       [LLC prefetch misses for data reads. Derived from unc_c_tor_inserts.miss_opcode.data_read. Unit: uncore_cbox]
  llc_misses.data_read                              
       [LLC misses - demand and prefetch data reads - excludes LLC prefetches. Derived from unc_c_tor_inserts.miss_opcode.demand. Unit:
        uncore_cbox]
  llc_misses.itom_write                             
       [LLC misses for ItoM writes (as part of fast string memcpy stores). Derived from unc_c_tor_inserts.miss_opcode.itom_write. Unit:
        uncore_cbox]
  llc_misses.pcie_non_snoop_read                    
       [LLC misses for PCIe non-snoop reads. Derived from unc_c_tor_inserts.miss_opcode.pcie_read. Unit: uncore_cbox]
  llc_misses.pcie_non_snoop_write                   
       [LLC misses for PCIe non-snoop writes (full line). Derived from unc_c_tor_inserts.miss_opcode.pcie_write. Unit: uncore_cbox]
  llc_misses.pcie_read                              
       [LLC misses for PCIe read current. Derived from unc_c_tor_inserts.miss_opcode.pcie_read. Unit: uncore_cbox]
  llc_misses.pcie_write                             
       [PCIe allocating writes that miss LLC - DDIO misses. Derived from unc_c_tor_inserts.miss_opcode.ddio_miss. Unit: uncore_cbox]
  llc_misses.rfo_llc_prefetch                       
       [LLC prefetch misses for RFO. Derived from unc_c_tor_inserts.miss_opcode.rfo_prefetch. Unit: uncore_cbox]
  llc_misses.uncacheable                            
       [LLC misses - Uncacheable reads. Derived from unc_c_tor_inserts.miss_opcode.uncacheable. Unit: uncore_cbox]
  llc_references.itom_write                         
       [ItoM write hits (as part of fast string memcpy stores). Derived from unc_c_tor_inserts.opcode.itom_write_hit. Unit: uncore_cbox]
  llc_references.pcie_ns_partial_write              
       [PCIe non-snoop writes (partial). Derived from unc_c_tor_inserts.opcode.pcie_partial_write. Unit: uncore_cbox]
  llc_references.pcie_ns_read                       
       [PCIe non-snoop reads. Derived from unc_c_tor_inserts.opcode.pcie_read. Unit: uncore_cbox]
  llc_references.pcie_ns_write                      
       [PCIe non-snoop writes (full line). Derived from unc_c_tor_inserts.opcode.pcie_full_write. Unit: uncore_cbox]
  llc_references.pcie_partial_read                  
       [Partial PCIe reads. Derived from unc_c_tor_inserts.opcode.pcie_partial. Unit: uncore_cbox]
  llc_references.pcie_read                          
       [PCIe read current. Derived from unc_c_tor_inserts.opcode.pcie_read_current. Unit: uncore_cbox]
  llc_references.pcie_write                         
       [PCIe allocating writes that hit in LLC (DDIO hits). Derived from unc_c_tor_inserts.opcode.ddio_hit. Unit: uncore_cbox]
  llc_references.streaming_full                     
       [Streaming stores (full cache line). Derived from unc_c_tor_inserts.opcode.streaming_full. Unit: uncore_cbox]
  llc_references.streaming_partial                  
       [Streaming stores (partial cache line). Derived from unc_c_tor_inserts.opcode.streaming_partial. Unit: uncore_cbox]
  unc_c_clockticks                                  
       [Uncore cache clock ticks. Unit: uncore_cbox]
  unc_c_llc_lookup.any                              
       [All LLC Misses (code+ data rd + data wr - including demand and prefetch). Unit: uncore_cbox]
  unc_c_llc_victims.m_state                         
       [M line evictions from LLC (writebacks to memory). Unit: uncore_cbox]
  unc_c_tor_occupancy.llc_data_read                 
       [Occupancy counter for LLC data reads (demand and L2 prefetch). Derived from unc_c_tor_occupancy.miss_opcode.llc_data_read. Unit:
        uncore_cbox]
  unc_c_tor_occupancy.miss_local                    
       [Occupancy for all LLC misses that are addressed to local memory. Unit: uncore_cbox]
  unc_c_tor_occupancy.miss_remote                   
       [Occupancy for all LLC misses that are addressed to remote memory. Unit: uncore_cbox]
  unc_h_requests.reads                              
       [Read requests to home agent. Unit: uncore_ha]
  unc_h_requests.writes                             
       [Write requests to home agent. Unit: uncore_ha]
  unc_h_snoop_resp.rsp_fwd_wb                       
       [M line forwarded from remote cache along with writeback to memory. Unit: uncore_ha]
  unc_h_snoop_resp.rspifwd                          
       [M line forwarded from remote cache with no writeback to memory. Unit: uncore_ha]
  unc_h_snoop_resp.rsps                             
       [Shared line response from remote cache. Unit: uncore_ha]
  unc_h_snoop_resp.rspsfwd                          
       [Shared line forwarded from remote cache. Unit: uncore_ha]

uncore memory:
  llc_misses.mem_read                               
       [Read requests to memory controller. Derived from unc_m_cas_count.rd. Unit: uncore_imc]
  llc_misses.mem_write                              
       [Write requests to memory controller. Derived from unc_m_cas_count.wr. Unit: uncore_imc]
  unc_m_act_count.rd                                
       [Memory page activates for reads and writes. Unit: uncore_imc]
  unc_m_clockticks                                  
       [Memory controller clock ticks. Use to generate percentages for memory controller CYCLES events. Unit: uncore_imc]
  unc_m_power_channel_ppd                           
       [Cycles where DRAM ranks are in power down (CKE) mode. Unit: uncore_imc]
  unc_m_power_critical_throttle_cycles              
       [Cycles all ranks are in critical thermal throttle. Unit: uncore_imc]
  unc_m_power_self_refresh                          
       [Cycles Memory is in self refresh power mode. Unit: uncore_imc]
  unc_m_pre_count.page_miss                         
       [Memory page conflicts. Unit: uncore_imc]

uncore power:
  unc_p_clockticks                                  
       [PCU clock ticks. Use to get percentages of PCU cycles events. Unit: uncore_pcu]
  unc_p_freq_band0_cycles                           
       [Counts the number of cycles that the uncore was running at a frequency greater than or equal to the frequency that is configured in the
        filter. (filter_band0=XXX, with XXX in 100Mhz units). One can also use inversion (filter_inv=1) to track cycles when we were less than
        the configured frequency. Unit: uncore_pcu]
  unc_p_freq_band0_transitions                      
       [Counts the number of times that the uncore transitioned a frequency greater than or equal to the frequency that is configured in the
        filter. (filter_band0=XXX, with XXX in 100Mhz units). One can also use inversion (filter_inv=1) to track cycles when we were less than
        the configured frequency. Derived from unc_p_freq_band0_cycles. Unit: uncore_pcu]
  unc_p_freq_band1_cycles                           
       [Counts the number of cycles that the uncore was running at a frequency greater than or equal to the frequency that is configured in the
        filter. (filter_band1=XXX, with XXX in 100Mhz units). One can also use inversion (filter_inv=1) to track cycles when we were less than
        the configured frequency. Unit: uncore_pcu]
  unc_p_freq_band1_transitions                      
       [Counts the number of times that the uncore transitioned to a frequency greater than or equal to the frequency that is configured in the
        filter. (filter_band1=XXX, with XXX in 100Mhz units). One can also use inversion (filter_inv=1) to track cycles when we were less than
        the configured frequency. Derived from unc_p_freq_band1_cycles. Unit: uncore_pcu]
  unc_p_freq_band2_cycles                           
       [Counts the number of cycles that the uncore was running at a frequency greater than or equal to the frequency that is configured in the
        filter. (filter_band2=XXX, with XXX in 100Mhz units). One can also use inversion (filter_inv=1) to track cycles when we were less than
        the configured frequency. Unit: uncore_pcu]
  unc_p_freq_band2_transitions                      
       [Counts the number of cycles that the uncore transitioned to a frequency greater than or equal to the frequency that is configured in the
        filter. (filter_band2=XXX, with XXX in 100Mhz units). One can also use inversion (filter_inv=1) to track cycles when we were less than
        the configured frequency. Derived from unc_p_freq_band2_cycles. Unit: uncore_pcu]
  unc_p_freq_band3_cycles                           
       [Counts the number of cycles that the uncore was running at a frequency greater than or equal to the frequency that is configured in the
        filter. (filter_band3=XXX, with XXX in 100Mhz units). One can also use inversion (filter_inv=1) to track cycles when we were less than
        the configured frequency. Unit: uncore_pcu]
  unc_p_freq_band3_transitions                      
       [Counts the number of cycles that the uncore transitioned to a frequency greater than or equal to the frequency that is configured in the
        filter. (filter_band3=XXX, with XXX in 100Mhz units). One can also use inversion (filter_inv=1) to track cycles when we were less than
        the configured frequency. Derived from unc_p_freq_band3_cycles. Unit: uncore_pcu]
  unc_p_freq_ge_1200mhz_cycles                      
       [Counts the number of cycles that the uncore was running at a frequency greater than or equal to 1.2Ghz. Derived from
        unc_p_freq_band0_cycles. Unit: uncore_pcu]
  unc_p_freq_ge_1200mhz_transitions                 
       [Counts the number of times that the uncore transitioned to a frequency greater than or equal to 1.2Ghz. Derived from
        unc_p_freq_band0_cycles. Unit: uncore_pcu]
  unc_p_freq_ge_2000mhz_cycles                      
       [Counts the number of cycles that the uncore was running at a frequency greater than or equal to 2Ghz. Derived from
        unc_p_freq_band1_cycles. Unit: uncore_pcu]
  unc_p_freq_ge_2000mhz_transitions                 
       [Counts the number of times that the uncore transitioned to a frequency greater than or equal to 2Ghz. Derived from
        unc_p_freq_band1_cycles. Unit: uncore_pcu]
  unc_p_freq_ge_3000mhz_cycles                      
       [Counts the number of cycles that the uncore was running at a frequency greater than or equal to 3Ghz. Derived from
        unc_p_freq_band2_cycles. Unit: uncore_pcu]
  unc_p_freq_ge_3000mhz_transitions                 
       [Counts the number of cycles that the uncore transitioned to a frequency greater than or equal to 3Ghz. Derived from
        unc_p_freq_band2_cycles. Unit: uncore_pcu]
  unc_p_freq_ge_4000mhz_cycles                      
       [Counts the number of cycles that the uncore was running at a frequency greater than or equal to 4Ghz. Derived from
        unc_p_freq_band3_cycles. Unit: uncore_pcu]
  unc_p_freq_ge_4000mhz_transitions                 
       [Counts the number of cycles that the uncore transitioned to a frequency greater than or equal to 4Ghz. Derived from
        unc_p_freq_band3_cycles. Unit: uncore_pcu]
  unc_p_freq_max_current_cycles                     
       [Counts the number of cycles when current is the upper limit on frequency. Unit: uncore_pcu]
  unc_p_freq_max_limit_thermal_cycles               
       [Counts the number of cycles when temperature is the upper limit on frequency. Unit: uncore_pcu]
  unc_p_freq_max_os_cycles                          
       [Counts the number of cycles when the OS is the upper limit on frequency. Unit: uncore_pcu]
  unc_p_freq_max_power_cycles                       
       [Counts the number of cycles when power is the upper limit on frequency. Unit: uncore_pcu]
  unc_p_freq_trans_cycles                           
       [Cycles spent changing Frequency. Unit: uncore_pcu]
  unc_p_power_state_occupancy.cores_c0              
       [This is an occupancy event that tracks the number of cores that are in C0. It can be used by itself to get the average number of cores in
        C0, with threshholding to generate histograms, or with other PCU events and occupancy triggering to capture other details. Unit:
        uncore_pcu]
  unc_p_power_state_occupancy.cores_c3              
       [This is an occupancy event that tracks the number of cores that are in C3. It can be used by itself to get the average number of cores in
        C0, with threshholding to generate histograms, or with other PCU events and occupancy triggering to capture other details. Unit:
        uncore_pcu]
  unc_p_power_state_occupancy.cores_c6              
       [This is an occupancy event that tracks the number of cores that are in C6. It can be used by itself to get the average number of cores in
        C0, with threshholding to generate histograms, or with other PCU events . Unit: uncore_pcu]
  unc_p_prochot_external_cycles                     
       [Counts the number of cycles that we are in external PROCHOT mode. This mode is triggered when a sensor off the die determines that
        something off-die (like DRAM) is too hot and must throttle to avoid damaging the chip. Unit: uncore_pcu]

virtual memory:
  dtlb_load_misses.demand_ld_walk_completed         
       [Demand load Miss in all translation lookaside buffer (TLB) levels causes a page walk that completes of any page size]
  dtlb_load_misses.demand_ld_walk_duration          
       [Demand load cycles page miss handler (PMH) is busy with this walk]
  dtlb_load_misses.large_page_walk_completed        
       [Page walk for a large page completed for Demand load]
  dtlb_load_misses.miss_causes_a_walk               
       [Demand load Miss in all translation lookaside buffer (TLB) levels causes an page walk of any page size]
  dtlb_load_misses.stlb_hit                         
       [Load operations that miss the first DTLB level but hit the second and do not cause page walks]
  dtlb_load_misses.walk_completed                   
       [Demand load Miss in all translation lookaside buffer (TLB) levels causes a page walk that completes of any page size]
  dtlb_load_misses.walk_duration                    
       [Demand load cycles page miss handler (PMH) is busy with this walk]
  dtlb_store_misses.miss_causes_a_walk              
       [Store misses in all DTLB levels that cause page walks]
  dtlb_store_misses.stlb_hit                        
       [Store operations that miss the first TLB level but hit the second and do not cause page walks]
  dtlb_store_misses.walk_completed                  
       [Store misses in all DTLB levels that cause completed page walks]
  dtlb_store_misses.walk_duration                   
       [Cycles when PMH is busy with page walks]
  ept.walk_cycles                                   
       [Cycle count for an Extended Page table walk. The Extended Page Directory cache is used by Virtual Machine operating systems while the
        guest operating systems use the standard TLB caches]
  itlb.itlb_flush                                   
       [Flushing of the Instruction TLB (ITLB) pages, includes 4k/2M/4M pages]
  itlb_misses.large_page_walk_completed             
       [Completed page walks in ITLB due to STLB load misses for large pages]
  itlb_misses.miss_causes_a_walk                    
       [Misses at all ITLB levels that cause page walks]
  itlb_misses.stlb_hit                              
       [Operations that miss the first ITLB level but hit the second and do not cause any page walks]
  itlb_misses.walk_completed                        
       [Misses in all ITLB levels that cause completed page walks]
  itlb_misses.walk_duration                         
       [Cycles when PMH is busy with page walks]
  tlb_flush.dtlb_thread                             
       [DTLB flush attempts of the thread-specific entries]
  tlb_flush.stlb_any                                
       [STLB flush attempts]

  duration_time                                      [Tool event]

Code: Select all

olej@R420:~$ perf --help

 usage: perf [--version] [--help] [OPTIONS] COMMAND [ARGS]

 The most commonly used perf commands are:
   annotate        Read perf.data (created by perf record) and display annotated code
   archive         Create archive with object files with build-ids found in perf.data file
   bench           General framework for benchmark suites
   buildid-cache   Manage build-id cache.
   buildid-list    List the buildids in a perf.data file
   c2c             Shared Data C2C/HITM Analyzer.
   config          Get and set variables in a configuration file.
   daemon          Run record sessions on background
   data            Data file related processing
   diff            Read perf.data files and display the differential profile
   evlist          List the event names in a perf.data file
   ftrace          simple wrapper for kernel's ftrace functionality
   inject          Filter to augment the events stream with additional information
   iostat          Show I/O performance metrics
   kallsyms        Searches running kernel for symbols
   kmem            Tool to trace/measure kernel memory properties
   kvm             Tool to trace/measure kvm guest os
   list            List all symbolic event types
   lock            Analyze lock events
   mem             Profile memory accesses
   record          Run a command and record its profile into perf.data
   report          Read perf.data (created by perf record) and display the profile
   sched           Tool to trace/measure scheduler properties (latencies)
   script          Read perf.data (created by perf record) and display trace output
   stat            Run a command and gather performance counter statistics
   test            Runs sanity tests.
   timechart       Tool to visualize total system behavior during a workload
   top             System profiling tool.
   version         display the version of perf binary
   probe           Define new dynamic tracepoints
   trace           strace inspired tool

 See 'perf help COMMAND' for more information on a specific command.
From the article «Механизмы профилирования Linux» (Linux profiling mechanisms):
That is why everything possible is now being piled into perf: from tracepoints to eBPF, and there are even plans to make all of ftrace a part of perf.


Berkeley Packet Filter (BPF)

Unread post by Olej » 23 Jul 2023, 11:46

An excellent book on BPF, available as a free download: «BPF для мониторинга Linux» (the Russian edition of Linux Observability with BPF):
[cover image]
David Calavera, Lorenzo Fontana
BPF для мониторинга Linux
ISBN: 978-5-4461-1624-9
208 pages
July 2020
Piter
Brief contents:
Introduction
Foreword
From the publisher
Chapter 1. Introduction
Chapter 2. Running BPF programs
Chapter 3. BPF maps
Chapter 4. Tracing with BPF
Chapter 5. BPF utilities
Chapter 6. Linux networking and BPF
Chapter 7. Express Data Path
Chapter 8. Linux kernel security, capabilities, and Seccomp
Chapter 9. Real-world use cases
About the authors
About the cover
Annotation for «BPF для мониторинга Linux»:
The BPF virtual machine is one of the most important components of the Linux kernel. Applied competently, it lets systems engineers track down failures and solve even the most complex problems.
You will learn to write programs that trace and modify kernel behavior, to inject code that safely observes events in the kernel, and much more.
David Calavera and Lorenzo Fontana will help you unlock the power of BPF and broaden your knowledge of performance optimization, networking, and security:
- Use BPF to trace and modify the behavior of the Linux kernel.
- Inject code to monitor events in the kernel safely, with no need to recompile the kernel or reboot the system.
- Make use of the handy code examples in C, Go, or Python.
- Stay in control by mastering the BPF program lifecycle.


Berkeley Packet Filter (BPF)

Unread post by Olej » 23 Jul 2023, 13:13

Olej wrote:
23 Jul 2023, 11:46
book on BPF
(The key concepts were highlighted by me :lol: )
In 1992, Steven McCanne and Van Jacobson published the paper The BSD Packet Filter: A New Architecture for User-Level Packet Capture. In it they described an implementation of a network packet filter for the Unix kernel that ran 20 times faster than everything else available in packet filtering at the time.
BPF introduced two major innovations in packet filtering:
- a new register-based virtual machine (VM), designed to work efficiently with the CPU;
- per-application buffers that could filter packets without copying all of the packet data, which minimized the amount of data BPF needed in order to make decisions.
A minimal illustration of such a classic filter in action is sketched just below.
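To make the register-VM idea concrete, here is a minimal sketch of a classic BPF filter (my own illustration, not from the book): it hand-assembles four cBPF instructions and attaches them to a raw AF_PACKET socket with SO_ATTACH_FILTER, so the kernel delivers only IPv4 frames to user space. Assumes Linux and root privileges (CAP_NET_RAW).

Code: Select all

#include <arpa/inet.h>        /* htons() */
#include <linux/filter.h>     /* struct sock_filter/sock_fprog, BPF_STMT/BPF_JUMP */
#include <linux/if_ether.h>   /* ETH_P_IP, ETH_P_ALL */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef SO_ATTACH_FILTER      /* from <asm/socket.h>; value 26 on Linux */
#define SO_ATTACH_FILTER 26
#endif

int main(void)
{
    /* Four instructions for the classic BPF register VM:
     * load the 16-bit EtherType at offset 12, accept IPv4, drop the rest.
     * "Accept" returns the number of bytes to pass to user space. */
    struct sock_filter code[] = {
        BPF_STMT(BPF_LD  | BPF_H   | BPF_ABS, 12),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, ETH_P_IP, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, 0xFFFF),   /* IPv4: keep up to 64 KiB */
        BPF_STMT(BPF_RET | BPF_K, 0),        /* anything else: drop */
    };
    struct sock_fprog prog = {
        .len    = sizeof(code) / sizeof(code[0]),
        .filter = code,
    };

    int s = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (s < 0) { perror("socket"); return 1; }

    /* The kernel checks the program and then runs it on every frame;
     * rejected frames are never copied into this socket's buffer. */
    if (setsockopt(s, SOL_SOCKET, SO_ATTACH_FILTER, &prog, sizeof(prog)) < 0) {
        perror("setsockopt(SO_ATTACH_FILTER)");
        return 1;
    }
    puts("classic BPF filter attached: this socket now sees only IPv4 frames");
    close(s);
    return 0;
}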
In early 2014, Alexei Starovoitov developed an extended implementation of BPF. The new approach was optimized for modern hardware, so its resulting instruction set runs faster than the machine code generated by the old BPF interpreter. The extended version also grew the number of registers in the BPF virtual machine from two 32-bit registers to ten 64-bit ones.
BPF programs came to look more like kernel modules, with a strong emphasis on safety and stability. Unlike ordinary kernel modules, BPF programs do not require recompiling the kernel and are guaranteed to terminate without crashing.
Alongside the changes that made BPF accessible from user space, kernel developers added a new system call (syscall), bpf. It would become the central element of communication between user space and the kernel.


Berkeley Packet Filter (BPF)

Unread post by Olej » 23 Jul 2023, 13:23

Olej wrote:
23 Jul 2023, 13:13
added a new system call (syscall), bpf. It would become the central element of communication between user space and the kernel.

Code: Select all

olej@R420:~$ man 2 bpf

BPF(2)                                                  Linux Programmer's Manual                                                  BPF(2)

NAME
       bpf - perform a command on an extended BPF map or program

SYNOPSIS
       #include <linux/bpf.h>

       int bpf(int cmd, union bpf_attr *attr, unsigned int size);

DESCRIPTION
       The bpf() system call performs a range of operations related to extended Berkeley Packet Filters.  Extended BPF (or eBPF) is simi‐
       lar to the original ("classic") BPF (cBPF) used to filter network packets.  For both cBPF and eBPF programs, the kernel statically
       analyzes the programs before loading them, in order to ensure that they cannot harm the running system.

       eBPF  extends cBPF in multiple ways, including the ability to call a fixed set of in-kernel helper functions (via the BPF_CALL op‐
       code extension provided by eBPF) and access shared data structures such as eBPF maps.
...
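Here is what calling it directly looks like: a minimal sketch of my own (not from the man page) that creates an eBPF array map with the raw bpf(2) syscall, the same path that BCC and libbpf take under the hood. Assumes Linux with eBPF support; run as root (or with CAP_BPF on recent kernels).

Code: Select all

#include <linux/bpf.h>     /* union bpf_attr, BPF_MAP_CREATE, map types */
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>   /* SYS_bpf */
#include <unistd.h>        /* syscall(), close() */

int main(void)
{
    union bpf_attr attr;
    memset(&attr, 0, sizeof(attr));   /* unused attr fields must be zero */
    attr.map_type    = BPF_MAP_TYPE_ARRAY;
    attr.key_size    = sizeof(int);   /* array maps require 4-byte keys */
    attr.value_size  = sizeof(long);
    attr.max_entries = 16;

    /* glibc provides no bpf() wrapper, so go through syscall(2). */
    int fd = (int) syscall(SYS_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
    if (fd < 0) {
        perror("bpf(BPF_MAP_CREATE)");
        return 1;
    }
    printf("created eBPF array map, fd = %d\n", fd);
    close(fd);
    return 0;
}

The returned file descriptor is then used with the other bpf() commands (BPF_MAP_LOOKUP_ELEM, BPF_MAP_UPDATE_ELEM and so on) and is released with an ordinary close().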


Berkeley Packet Filter (BPF)

Unread post by Olej » 23 Jul 2023, 13:28

Olej wrote:
23 Jul 2023, 13:13
book on BPF
As we said earlier, BPF is a highly advanced virtual machine that executes code instructions in an isolated environment. In a certain sense, you can think of BPF the way you think of the Java Virtual Machine (JVM): a specialized program that runs machine code compiled from a high-level programming language. Compilers such as LLVM and the GNU Compiler Collection (GCC) will provide support for BPF in the near future, allowing you to compile C code into BPF instructions. Once the code is compiled, BPF uses a verifier to make sure the program is safe to run in kernel space. It keeps you from running code that could endanger your system by crashing the kernel.
This was written in 2020 ... and as for "will provide support for BPF in the near future": today that is already the case; both Clang and GCC now provide this capability.
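For example, Clang compiles restricted C straight to BPF instructions with -target bpf. A minimal sketch of my own (assuming the libbpf development headers are installed; the file name hello.bpf.c is hypothetical):

Code: Select all

/* hello.bpf.c: build with
 *   clang -O2 -target bpf -c hello.bpf.c -o hello.bpf.o
 * and inspect the generated BPF instructions with
 *   llvm-objdump -d hello.bpf.o
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>   /* SEC(), bpf_printk(); from libbpf */

/* Attach point: kprobe on the execve syscall handler (x86_64 symbol name). */
SEC("kprobe/__x64_sys_execve")
int hello(void *ctx)
{
    bpf_printk("Hello from eBPF!");
    return 0;
}

/* The verifier requires a license string; "GPL" unlocks all helpers. */
char LICENSE[] SEC("license") = "GPL";

Loading the object (for instance with bpftool prog load hello.bpf.o /sys/fs/bpf/hello) is exactly the moment when the in-kernel verifier described in the quote checks the program before it is allowed to run.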


Berkeley Packet Filter (BPF)

Unread post by Olej » 23 Jul 2023, 15:15

Olej wrote:
23 Jul 2023, 13:28
This was written in 2020 ...
All of the code examples for the book are here:
Code: Select all

olej@R420:~/2023/own.BOOKs/eBPF/kalavera2021bpf$ git clone https://github.com/bpftools/linux-observability-with-bpf
Cloning into 'linux-observability-with-bpf'...
remote: Enumerating objects: 401, done.
remote: Counting objects: 100% (398/398), done.
remote: Compressing objects: 100% (185/185), done.
remote: Total 401 (delta 175), reused 371 (delta 169), pack-reused 3
Receiving objects: 100% (401/401), 200.29 KiB | 1.44 MiB/s, done.
Resolving deltas: 100% (175/175), done.
Code snippets from the O'Reilly book

Code: Select all

olej@R420:~/2023/own.BOOKs/eBPF$ ls linux-observability-with-bpf/code -l
total 24
drwxrwxr-x  3 olej olej 4096 Jul 23 15:09 chapter-2
drwxrwxr-x  3 olej olej 4096 Jul 23 15:09 chapter-3
drwxrwxr-x 11 olej olej 4096 Jul 23 15:09 chapter-4
drwxrwxr-x  4 olej olej 4096 Jul 23 15:09 chapter-6
drwxrwxr-x  5 olej olej 4096 Jul 23 15:09 chapter-7
drwxrwxr-x  3 olej olej 4096 Jul 23 15:09 chapter-8
With this advance notice in README.md:
Important note for readers (Jan 30th 2022)
This repository is now archived, this book was published in 2019 and written in 2018. We have been trying to keep the repository up-to-date until now but eBPF had a tremendous evolution in the past 3 years. This does not mean that reading the book is a complete waste of your time now, many concepts are always the same: like how the bpf syscall works, the instruction set and things like how tracepoints, kprobes, uprobes, xdp and traffic control works. However, at this point, just updating the examples here is not enough anymore and many areas of the book would need to be rewritten to fit the new concepts, tools, libraries and the ecosystem around eBPF.
In other words, although the book was written in 2018, its code examples were kept up to date until January 2022 (per the note above), i.e. they were only about a year old at the time of this post.


Berkeley Packet Filter (BPF)

Unread post by Olej » 23 Jul 2023, 16:42

Olej wrote:
23 Jul 2023, 11:46
An excellent book on BPF, available as a free download
The same book can be found on Flibusta, here (read, download): BPF для мониторинга Linux (pdf)
[cover image]
2021 edition
Added: 26.10.2020
