284 files changed, 7129 insertions, 2282 deletions
diff --git a/Documentation/networking/devlink/devlink-trap.rst b/Documentation/networking/devlink/devlink-trap.rst index fe089acb7783..1e3f3ffee248 100644 --- a/Documentation/networking/devlink/devlink-trap.rst +++ b/Documentation/networking/devlink/devlink-trap.rst @@ -55,7 +55,7 @@ The following diagram provides a general overview of ``devlink-trap``:: | | +-------^--------+ | - | + | Non-control traps | +----+----+ | | Kernel's Rx path @@ -97,6 +97,12 @@ The ``devlink-trap`` mechanism supports the following packet trap types: processed by ``devlink`` and injected to the kernel's Rx path. Changing the action of such traps is not allowed, as it can easily break the control plane. + * ``control``: Trapped packets were trapped by the device because these are + control packets required for the correct functioning of the control plane. + For example, ARP request and IGMP query packets. Packets are injected to + the kernel's Rx path, but not reported to the kernel's drop monitor. + Changing the action of such traps is not allowed, as it can easily break + the control plane. .. _Trap-Actions: @@ -108,6 +114,8 @@ The ``devlink-trap`` mechanism supports the following packet trap actions: * ``trap``: The sole copy of the packet is sent to the CPU. * ``drop``: The packet is dropped by the underlying device and a copy is not sent to the CPU. + * ``mirror``: The packet is forwarded by the underlying device and a copy is + sent to the CPU. Generic Packet Traps ==================== @@ -244,6 +252,159 @@ be added to the following table: * - ``egress_flow_action_drop`` - ``drop`` - Traps packets dropped during processing of egress flow action drop + * - ``stp`` + - ``control`` + - Traps STP packets + * - ``lacp`` + - ``control`` + - Traps LACP packets + * - ``lldp`` + - ``control`` + - Traps LLDP packets + * - ``igmp_query`` + - ``control`` + - Traps IGMP Membership Query packets + * - ``igmp_v1_report`` + - ``control`` + - Traps IGMP Version 1 Membership Report packets + * - ``igmp_v2_report`` + - ``control`` + - Traps IGMP Version 2 Membership Report packets + * - ``igmp_v3_report`` + - ``control`` + - Traps IGMP Version 3 Membership Report packets + * - ``igmp_v2_leave`` + - ``control`` + - Traps IGMP Version 2 Leave Group packets + * - ``mld_query`` + - ``control`` + - Traps MLD Multicast Listener Query packets + * - ``mld_v1_report`` + - ``control`` + - Traps MLD Version 1 Multicast Listener Report packets + * - ``mld_v2_report`` + - ``control`` + - Traps MLD Version 2 Multicast Listener Report packets + * - ``mld_v1_done`` + - ``control`` + - Traps MLD Version 1 Multicast Listener Done packets + * - ``ipv4_dhcp`` + - ``control`` + - Traps IPv4 DHCP packets + * - ``ipv6_dhcp`` + - ``control`` + - Traps IPv6 DHCP packets + * - ``arp_request`` + - ``control`` + - Traps ARP request packets + * - ``arp_response`` + - ``control`` + - Traps ARP response packets + * - ``arp_overlay`` + - ``control`` + - Traps NVE-decapsulated ARP packets that reached the overlay network. 
+ This is required, for example, when the address that needs to be + resolved is a local address + * - ``ipv6_neigh_solicit`` + - ``control`` + - Traps IPv6 Neighbour Solicitation packets + * - ``ipv6_neigh_advert`` + - ``control`` + - Traps IPv6 Neighbour Advertisement packets + * - ``ipv4_bfd`` + - ``control`` + - Traps IPv4 BFD packets + * - ``ipv6_bfd`` + - ``control`` + - Traps IPv6 BFD packets + * - ``ipv4_ospf`` + - ``control`` + - Traps IPv4 OSPF packets + * - ``ipv6_ospf`` + - ``control`` + - Traps IPv6 OSPF packets + * - ``ipv4_bgp`` + - ``control`` + - Traps IPv4 BGP packets + * - ``ipv6_bgp`` + - ``control`` + - Traps IPv6 BGP packets + * - ``ipv4_vrrp`` + - ``control`` + - Traps IPv4 VRRP packets + * - ``ipv6_vrrp`` + - ``control`` + - Traps IPv6 VRRP packets + * - ``ipv4_pim`` + - ``control`` + - Traps IPv4 PIM packets + * - ``ipv6_pim`` + - ``control`` + - Traps IPv6 PIM packets + * - ``uc_loopback`` + - ``control`` + - Traps unicast packets that need to be routed through the same layer 3 + interface from which they were received. Such packets are routed by the + kernel, but also cause it to potentially generate ICMP redirect packets + * - ``local_route`` + - ``control`` + - Traps unicast packets that hit a local route and need to be locally + delivered + * - ``external_route`` + - ``control`` + - Traps packets that should be routed through an external interface (e.g., + management interface) that does not belong to the same device (e.g., + switch ASIC) as the ingress interface + * - ``ipv6_uc_dip_link_local_scope`` + - ``control`` + - Traps unicast IPv6 packets that need to be routed and have a destination + IP address with a link-local scope (i.e., fe80::/10). The trap allows + device drivers to avoid programming link-local routes, but still receive + packets for local delivery + * - ``ipv6_dip_all_nodes`` + - ``control`` + - Traps IPv6 packets that their destination IP address is the "All Nodes + Address" (i.e., ff02::1) + * - ``ipv6_dip_all_routers`` + - ``control`` + - Traps IPv6 packets that their destination IP address is the "All Routers + Address" (i.e., ff02::2) + * - ``ipv6_router_solicit`` + - ``control`` + - Traps IPv6 Router Solicitation packets + * - ``ipv6_router_advert`` + - ``control`` + - Traps IPv6 Router Advertisement packets + * - ``ipv6_redirect`` + - ``control`` + - Traps IPv6 Redirect Message packets + * - ``ipv4_router_alert`` + - ``control`` + - Traps IPv4 packets that need to be routed and include the Router Alert + option. Such packets need to be locally delivered to raw sockets that + have the IP_ROUTER_ALERT socket option set + * - ``ipv6_router_alert`` + - ``control`` + - Traps IPv6 packets that need to be routed and include the Router Alert + option in their Hop-by-Hop extension header. Such packets need to be + locally delivered to raw sockets that have the IPV6_ROUTER_ALERT socket + option set + * - ``ptp_event`` + - ``control`` + - Traps PTP time-critical event messages (Sync, Delay_req, Pdelay_Req and + Pdelay_Resp) + * - ``ptp_general`` + - ``control`` + - Traps PTP general messages (Announce, Follow_Up, Delay_Resp, + Pdelay_Resp_Follow_Up, management and signaling) + * - ``flow_action_sample`` + - ``control`` + - Traps packets sampled during processing of flow action sample (e.g., via + tc's sample action) + * - ``flow_action_trap`` + - ``control`` + - Traps packets logged during processing of flow action trap (e.g., via + tc's trap action) Driver-specific Packet Traps ============================ @@ -277,8 +438,11 @@ narrow. 
The description of these groups must be added to the following table: - Contains packet traps for packets that were dropped by the device during layer 2 forwarding (i.e., bridge) * - ``l3_drops`` - - Contains packet traps for packets that were dropped by the device or hit - an exception (e.g., TTL error) during layer 3 forwarding + - Contains packet traps for packets that were dropped by the device during + layer 3 forwarding + * - ``l3_exceptions`` + - Contains packet traps for packets that hit an exception (e.g., TTL + error) during layer 3 forwarding * - ``buffer_drops`` - Contains packet traps for packets that were dropped by the device due to an enqueue decision @@ -288,6 +452,55 @@ narrow. The description of these groups must be added to the following table: * - ``acl_drops`` - Contains packet traps for packets that were dropped by the device during ACL processing + * - ``stp`` + - Contains packet traps for STP packets + * - ``lacp`` + - Contains packet traps for LACP packets + * - ``lldp`` + - Contains packet traps for LLDP packets + * - ``mc_snooping`` + - Contains packet traps for IGMP and MLD packets required for multicast + snooping + * - ``dhcp`` + - Contains packet traps for DHCP packets + * - ``neigh_discovery`` + - Contains packet traps for neighbour discovery packets (e.g., ARP, IPv6 + ND) + * - ``bfd`` + - Contains packet traps for BFD packets + * - ``ospf`` + - Contains packet traps for OSPF packets + * - ``bgp`` + - Contains packet traps for BGP packets + * - ``vrrp`` + - Contains packet traps for VRRP packets + * - ``pim`` + - Contains packet traps for PIM packets + * - ``uc_loopback`` + - Contains a packet trap for unicast loopback packets (i.e., + ``uc_loopback``). This trap is singled-out because in cases such as + one-armed router it will be constantly triggered. To limit the impact on + the CPU usage, a packet trap policer with a low rate can be bound to the + group without affecting other traps + * - ``local_delivery`` + - Contains packet traps for packets that should be locally delivered after + routing, but do not match more specific packet traps (e.g., + ``ipv4_bgp``) + * - ``ipv6`` + - Contains packet traps for various IPv6 control packets (e.g., Router + Advertisements) + * - ``ptp_event`` + - Contains packet traps for PTP time-critical event messages (Sync, + Delay_req, Pdelay_Req and Pdelay_Resp) + * - ``ptp_general`` + - Contains packet traps for PTP general messages (Announce, Follow_Up, + Delay_Resp, Pdelay_Resp_Follow_Up, management and signaling) + * - ``acl_sample`` + - Contains packet traps for packets that were sampled by the device during + ACL processing + * - ``acl_trap`` + - Contains packet traps for packets that were trapped (logged) by the + device during ACL processing Packet Trap Policers ==================== diff --git a/Documentation/process/coding-style.rst b/Documentation/process/coding-style.rst index acb2f1b36350..17a8e584f15f 100644 --- a/Documentation/process/coding-style.rst +++ b/Documentation/process/coding-style.rst @@ -84,15 +84,20 @@ Get a decent editor and don't leave whitespace at the end of lines. Coding style is all about readability and maintainability using commonly available tools. -The limit on the length of lines is 80 columns and this is a strongly -preferred limit. - -Statements longer than 80 columns will be broken into sensible chunks, unless -exceeding 80 columns significantly increases readability and does not hide -information. 
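A note on the coding-style.rst hunk around this point: the replacement text (continued just below) says to break statements longer than 80 columns into sensible chunks, to align descendants to the function's open parenthesis, and never to break user-visible strings. A minimal C sketch of those three points, using hypothetical function and message names:

#include <linux/printk.h>

/* Hypothetical helper, declared here only so the example is self-contained. */
int do_long_named_operation(int first_argument, int second_argument,
			    int third_argument, int fourth_argument);

static int frobnicate_example(void)
{
	int ret;

	/* A long statement broken into sensible chunks; the descendant line
	 * is aligned to the open parenthesis of the call.
	 */
	ret = do_long_named_operation(1024, 2048,
				      4096, 8192);

	/* User-visible strings are never broken, so they stay greppable. */
	if (ret)
		pr_err("frobnicate_example: do_long_named_operation failed: %d\n", ret);

	return ret;
}

The open-parenthesis alignment shown above is only what the hunk calls a "very commonly used style", not a requirement.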
Descendants are always substantially shorter than the parent and -are placed substantially to the right. The same applies to function headers -with a long argument list. However, never break user-visible strings such as -printk messages, because that breaks the ability to grep for them. +The preferred limit on the length of a single line is 80 columns. + +Statements longer than 80 columns should be broken into sensible chunks, +unless exceeding 80 columns significantly increases readability and does +not hide information. + +Descendants are always substantially shorter than the parent and are +are placed substantially to the right. A very commonly used style +is to align descendants to a function open parenthesis. + +These same rules are applied to function headers with a long argument list. + +However, never break user-visible strings such as printk messages because +that breaks the ability to grep for them. 3) Placing Braces and Spaces @@ -2,7 +2,7 @@ VERSION = 5 PATCHLEVEL = 7 SUBLEVEL = 0 -EXTRAVERSION = -rc6 +EXTRAVERSION = -rc7 NAME = Kleptomaniac Octopus # *DOCUMENTATION* diff --git a/arch/arm/boot/compressed/vmlinux.lds.S b/arch/arm/boot/compressed/vmlinux.lds.S index b247f399de71..f82b5962d97e 100644 --- a/arch/arm/boot/compressed/vmlinux.lds.S +++ b/arch/arm/boot/compressed/vmlinux.lds.S @@ -42,7 +42,7 @@ SECTIONS } .table : ALIGN(4) { _table_start = .; - LONG(ZIMAGE_MAGIC(2)) + LONG(ZIMAGE_MAGIC(4)) LONG(ZIMAGE_MAGIC(0x5a534c4b)) LONG(ZIMAGE_MAGIC(__piggy_size_addr - _start)) LONG(ZIMAGE_MAGIC(_kernel_bss_size)) diff --git a/arch/arm/boot/dts/am437x-gp-evm.dts b/arch/arm/boot/dts/am437x-gp-evm.dts index 811c8cae315b..d692e3b2812a 100644 --- a/arch/arm/boot/dts/am437x-gp-evm.dts +++ b/arch/arm/boot/dts/am437x-gp-evm.dts @@ -943,7 +943,7 @@ &cpsw_emac0 { phy-handle = <ðphy0>; - phy-mode = "rgmii"; + phy-mode = "rgmii-rxid"; }; &elm { diff --git a/arch/arm/boot/dts/am437x-idk-evm.dts b/arch/arm/boot/dts/am437x-idk-evm.dts index 9f66f96d09c9..a958f9ee4a5a 100644 --- a/arch/arm/boot/dts/am437x-idk-evm.dts +++ b/arch/arm/boot/dts/am437x-idk-evm.dts @@ -504,7 +504,7 @@ &cpsw_emac0 { phy-handle = <ðphy0>; - phy-mode = "rgmii"; + phy-mode = "rgmii-rxid"; }; &rtc { diff --git a/arch/arm/boot/dts/am437x-sk-evm.dts b/arch/arm/boot/dts/am437x-sk-evm.dts index 25222497f828..4d5a7ca2e25d 100644 --- a/arch/arm/boot/dts/am437x-sk-evm.dts +++ b/arch/arm/boot/dts/am437x-sk-evm.dts @@ -833,13 +833,13 @@ &cpsw_emac0 { phy-handle = <ðphy0>; - phy-mode = "rgmii"; + phy-mode = "rgmii-rxid"; dual_emac_res_vlan = <1>; }; &cpsw_emac1 { phy-handle = <ðphy1>; - phy-mode = "rgmii"; + phy-mode = "rgmii-rxid"; dual_emac_res_vlan = <2>; }; diff --git a/arch/arm/boot/dts/am571x-idk.dts b/arch/arm/boot/dts/am571x-idk.dts index 669559c9c95b..c13756fa0f55 100644 --- a/arch/arm/boot/dts/am571x-idk.dts +++ b/arch/arm/boot/dts/am571x-idk.dts @@ -190,13 +190,13 @@ &cpsw_port1 { phy-handle = <ðphy0_sw>; - phy-mode = "rgmii"; + phy-mode = "rgmii-rxid"; ti,dual-emac-pvid = <1>; }; &cpsw_port2 { phy-handle = <ðphy1_sw>; - phy-mode = "rgmii"; + phy-mode = "rgmii-rxid"; ti,dual-emac-pvid = <2>; }; diff --git a/arch/arm/boot/dts/am57xx-beagle-x15-common.dtsi b/arch/arm/boot/dts/am57xx-beagle-x15-common.dtsi index a813a0cf3ff3..565675354de4 100644 --- a/arch/arm/boot/dts/am57xx-beagle-x15-common.dtsi +++ b/arch/arm/boot/dts/am57xx-beagle-x15-common.dtsi @@ -433,13 +433,13 @@ &cpsw_emac0 { phy-handle = <&phy0>; - phy-mode = "rgmii"; + phy-mode = "rgmii-rxid"; dual_emac_res_vlan = <1>; }; &cpsw_emac1 { phy-handle = <&phy1>; - 
phy-mode = "rgmii"; + phy-mode = "rgmii-rxid"; dual_emac_res_vlan = <2>; }; diff --git a/arch/arm/boot/dts/am57xx-idk-common.dtsi b/arch/arm/boot/dts/am57xx-idk-common.dtsi index aa5e55f98179..a3ff1237d1fa 100644 --- a/arch/arm/boot/dts/am57xx-idk-common.dtsi +++ b/arch/arm/boot/dts/am57xx-idk-common.dtsi @@ -408,13 +408,13 @@ &cpsw_emac0 { phy-handle = <ðphy0>; - phy-mode = "rgmii"; + phy-mode = "rgmii-rxid"; dual_emac_res_vlan = <1>; }; &cpsw_emac1 { phy-handle = <ðphy1>; - phy-mode = "rgmii"; + phy-mode = "rgmii-rxid"; dual_emac_res_vlan = <2>; }; diff --git a/arch/arm/boot/dts/bcm-hr2.dtsi b/arch/arm/boot/dts/bcm-hr2.dtsi index 6142c672811e..5e5f5ca3c86f 100644 --- a/arch/arm/boot/dts/bcm-hr2.dtsi +++ b/arch/arm/boot/dts/bcm-hr2.dtsi @@ -75,7 +75,7 @@ timer@20200 { compatible = "arm,cortex-a9-global-timer"; reg = <0x20200 0x100>; - interrupts = <GIC_PPI 11 IRQ_TYPE_LEVEL_HIGH>; + interrupts = <GIC_PPI 11 IRQ_TYPE_EDGE_RISING>; clocks = <&periph_clk>; }; @@ -83,7 +83,7 @@ compatible = "arm,cortex-a9-twd-timer"; reg = <0x20600 0x20>; interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(1) | - IRQ_TYPE_LEVEL_HIGH)>; + IRQ_TYPE_EDGE_RISING)>; clocks = <&periph_clk>; }; @@ -91,7 +91,7 @@ compatible = "arm,cortex-a9-twd-wdt"; reg = <0x20620 0x20>; interrupts = <GIC_PPI 14 (GIC_CPU_MASK_SIMPLE(1) | - IRQ_TYPE_LEVEL_HIGH)>; + IRQ_TYPE_EDGE_RISING)>; clocks = <&periph_clk>; }; diff --git a/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts b/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts index 4c3f606e5b8d..f65448c01e31 100644 --- a/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts +++ b/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts @@ -24,7 +24,7 @@ leds { act { - gpios = <&gpio 47 GPIO_ACTIVE_HIGH>; + gpios = <&gpio 47 GPIO_ACTIVE_LOW>; }; }; diff --git a/arch/arm/boot/dts/dm814x.dtsi b/arch/arm/boot/dts/dm814x.dtsi index 44ed5a798164..c28ca0540f03 100644 --- a/arch/arm/boot/dts/dm814x.dtsi +++ b/arch/arm/boot/dts/dm814x.dtsi @@ -693,7 +693,7 @@ davinci_mdio: mdio@800 { compatible = "ti,cpsw-mdio", "ti,davinci_mdio"; - clocks = <&alwon_ethernet_clkctrl DM814_ETHERNET_CPGMAC0_CLKCTRL 0>; + clocks = <&cpsw_125mhz_gclk>; clock-names = "fck"; #address-cells = <1>; #size-cells = <0>; diff --git a/arch/arm/boot/dts/imx6q-b450v3.dts b/arch/arm/boot/dts/imx6q-b450v3.dts index 95b8f2d71821..fb0980190aa0 100644 --- a/arch/arm/boot/dts/imx6q-b450v3.dts +++ b/arch/arm/boot/dts/imx6q-b450v3.dts @@ -65,13 +65,6 @@ }; }; -&clks { - assigned-clocks = <&clks IMX6QDL_CLK_LDB_DI0_SEL>, - <&clks IMX6QDL_CLK_LDB_DI1_SEL>; - assigned-clock-parents = <&clks IMX6QDL_CLK_PLL3_USB_OTG>, - <&clks IMX6QDL_CLK_PLL3_USB_OTG>; -}; - &ldb { status = "okay"; diff --git a/arch/arm/boot/dts/imx6q-b650v3.dts b/arch/arm/boot/dts/imx6q-b650v3.dts index 611cb7ae7e55..8f762d9c5ae9 100644 --- a/arch/arm/boot/dts/imx6q-b650v3.dts +++ b/arch/arm/boot/dts/imx6q-b650v3.dts @@ -65,13 +65,6 @@ }; }; -&clks { - assigned-clocks = <&clks IMX6QDL_CLK_LDB_DI0_SEL>, - <&clks IMX6QDL_CLK_LDB_DI1_SEL>; - assigned-clock-parents = <&clks IMX6QDL_CLK_PLL3_USB_OTG>, - <&clks IMX6QDL_CLK_PLL3_USB_OTG>; -}; - &ldb { status = "okay"; diff --git a/arch/arm/boot/dts/imx6q-b850v3.dts b/arch/arm/boot/dts/imx6q-b850v3.dts index e4cb118f88c6..1ea64ecf4291 100644 --- a/arch/arm/boot/dts/imx6q-b850v3.dts +++ b/arch/arm/boot/dts/imx6q-b850v3.dts @@ -53,17 +53,6 @@ }; }; -&clks { - assigned-clocks = <&clks IMX6QDL_CLK_LDB_DI0_SEL>, - <&clks IMX6QDL_CLK_LDB_DI1_SEL>, - <&clks IMX6QDL_CLK_IPU1_DI0_PRE_SEL>, - <&clks IMX6QDL_CLK_IPU2_DI0_PRE_SEL>; - assigned-clock-parents = <&clks 
IMX6QDL_CLK_PLL5_VIDEO_DIV>, - <&clks IMX6QDL_CLK_PLL5_VIDEO_DIV>, - <&clks IMX6QDL_CLK_PLL2_PFD2_396M>, - <&clks IMX6QDL_CLK_PLL2_PFD2_396M>; -}; - &ldb { fsl,dual-channel; status = "okay"; diff --git a/arch/arm/boot/dts/imx6q-bx50v3.dtsi b/arch/arm/boot/dts/imx6q-bx50v3.dtsi index fa27dcdf06f1..1938b04199c4 100644 --- a/arch/arm/boot/dts/imx6q-bx50v3.dtsi +++ b/arch/arm/boot/dts/imx6q-bx50v3.dtsi @@ -377,3 +377,18 @@ #interrupt-cells = <1>; }; }; + +&clks { + assigned-clocks = <&clks IMX6QDL_CLK_LDB_DI0_SEL>, + <&clks IMX6QDL_CLK_LDB_DI1_SEL>, + <&clks IMX6QDL_CLK_IPU1_DI0_PRE_SEL>, + <&clks IMX6QDL_CLK_IPU1_DI1_PRE_SEL>, + <&clks IMX6QDL_CLK_IPU2_DI0_PRE_SEL>, + <&clks IMX6QDL_CLK_IPU2_DI1_PRE_SEL>; + assigned-clock-parents = <&clks IMX6QDL_CLK_PLL5_VIDEO_DIV>, + <&clks IMX6QDL_CLK_PLL5_VIDEO_DIV>, + <&clks IMX6QDL_CLK_PLL2_PFD0_352M>, + <&clks IMX6QDL_CLK_PLL2_PFD0_352M>, + <&clks IMX6QDL_CLK_PLL2_PFD0_352M>, + <&clks IMX6QDL_CLK_PLL2_PFD0_352M>; +}; diff --git a/arch/arm/boot/dts/mmp3-dell-ariel.dts b/arch/arm/boot/dts/mmp3-dell-ariel.dts index 15449c72c042..b0ec14c42164 100644 --- a/arch/arm/boot/dts/mmp3-dell-ariel.dts +++ b/arch/arm/boot/dts/mmp3-dell-ariel.dts @@ -98,19 +98,19 @@ status = "okay"; }; -&ssp3 { +&ssp1 { status = "okay"; - cs-gpios = <&gpio 46 GPIO_ACTIVE_HIGH>; + cs-gpios = <&gpio 46 GPIO_ACTIVE_LOW>; firmware-flash@0 { - compatible = "st,m25p80", "jedec,spi-nor"; + compatible = "winbond,w25q32", "jedec,spi-nor"; reg = <0>; - spi-max-frequency = <40000000>; + spi-max-frequency = <104000000>; m25p,fast-read; }; }; -&ssp4 { - cs-gpios = <&gpio 56 GPIO_ACTIVE_HIGH>; +&ssp2 { + cs-gpios = <&gpio 56 GPIO_ACTIVE_LOW>; status = "okay"; }; diff --git a/arch/arm/boot/dts/mmp3.dtsi b/arch/arm/boot/dts/mmp3.dtsi index 9b5087a95e73..826f0a577859 100644 --- a/arch/arm/boot/dts/mmp3.dtsi +++ b/arch/arm/boot/dts/mmp3.dtsi @@ -202,8 +202,7 @@ }; hsic_phy0: hsic-phy@f0001800 { - compatible = "marvell,mmp3-hsic-phy", - "usb-nop-xceiv"; + compatible = "marvell,mmp3-hsic-phy"; reg = <0xf0001800 0x40>; #phy-cells = <0>; status = "disabled"; @@ -224,8 +223,7 @@ }; hsic_phy1: hsic-phy@f0002800 { - compatible = "marvell,mmp3-hsic-phy", - "usb-nop-xceiv"; + compatible = "marvell,mmp3-hsic-phy"; reg = <0xf0002800 0x40>; #phy-cells = <0>; status = "disabled"; @@ -531,7 +529,7 @@ }; soc_clocks: clocks@d4050000 { - compatible = "marvell,mmp2-clock"; + compatible = "marvell,mmp3-clock"; reg = <0xd4050000 0x1000>, <0xd4282800 0x400>, <0xd4015000 0x1000>; diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h index 99929122dad7..3546d294d55f 100644 --- a/arch/arm/include/asm/assembler.h +++ b/arch/arm/include/asm/assembler.h @@ -18,11 +18,11 @@ #endif #include <asm/ptrace.h> -#include <asm/domain.h> #include <asm/opcodes-virt.h> #include <asm/asm-offsets.h> #include <asm/page.h> #include <asm/thread_info.h> +#include <asm/uaccess-asm.h> #define IOMEM(x) (x) @@ -446,79 +446,6 @@ THUMB( orr \reg , \reg , #PSR_T_BIT ) .size \name , . 
- \name .endm - .macro csdb -#ifdef CONFIG_THUMB2_KERNEL - .inst.w 0xf3af8014 -#else - .inst 0xe320f014 -#endif - .endm - - .macro check_uaccess, addr:req, size:req, limit:req, tmp:req, bad:req -#ifndef CONFIG_CPU_USE_DOMAINS - adds \tmp, \addr, #\size - 1 - sbcscc \tmp, \tmp, \limit - bcs \bad -#ifdef CONFIG_CPU_SPECTRE - movcs \addr, #0 - csdb -#endif -#endif - .endm - - .macro uaccess_mask_range_ptr, addr:req, size:req, limit:req, tmp:req -#ifdef CONFIG_CPU_SPECTRE - sub \tmp, \limit, #1 - subs \tmp, \tmp, \addr @ tmp = limit - 1 - addr - addhs \tmp, \tmp, #1 @ if (tmp >= 0) { - subshs \tmp, \tmp, \size @ tmp = limit - (addr + size) } - movlo \addr, #0 @ if (tmp < 0) addr = NULL - csdb -#endif - .endm - - .macro uaccess_disable, tmp, isb=1 -#ifdef CONFIG_CPU_SW_DOMAIN_PAN - /* - * Whenever we re-enter userspace, the domains should always be - * set appropriately. - */ - mov \tmp, #DACR_UACCESS_DISABLE - mcr p15, 0, \tmp, c3, c0, 0 @ Set domain register - .if \isb - instr_sync - .endif -#endif - .endm - - .macro uaccess_enable, tmp, isb=1 -#ifdef CONFIG_CPU_SW_DOMAIN_PAN - /* - * Whenever we re-enter userspace, the domains should always be - * set appropriately. - */ - mov \tmp, #DACR_UACCESS_ENABLE - mcr p15, 0, \tmp, c3, c0, 0 - .if \isb - instr_sync - .endif -#endif - .endm - - .macro uaccess_save, tmp -#ifdef CONFIG_CPU_SW_DOMAIN_PAN - mrc p15, 0, \tmp, c3, c0, 0 - str \tmp, [sp, #SVC_DACR] -#endif - .endm - - .macro uaccess_restore -#ifdef CONFIG_CPU_SW_DOMAIN_PAN - ldr r0, [sp, #SVC_DACR] - mcr p15, 0, r0, c3, c0, 0 -#endif - .endm - .irp c,,eq,ne,cs,cc,mi,pl,vs,vc,hi,ls,ge,lt,gt,le,hs,lo .macro ret\c, reg #if __LINUX_ARM_ARCH__ < 6 diff --git a/arch/arm/include/asm/uaccess-asm.h b/arch/arm/include/asm/uaccess-asm.h new file mode 100644 index 000000000000..907571fd05c6 --- /dev/null +++ b/arch/arm/include/asm/uaccess-asm.h @@ -0,0 +1,117 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ + +#ifndef __ASM_UACCESS_ASM_H__ +#define __ASM_UACCESS_ASM_H__ + +#include <asm/asm-offsets.h> +#include <asm/domain.h> +#include <asm/memory.h> +#include <asm/thread_info.h> + + .macro csdb +#ifdef CONFIG_THUMB2_KERNEL + .inst.w 0xf3af8014 +#else + .inst 0xe320f014 +#endif + .endm + + .macro check_uaccess, addr:req, size:req, limit:req, tmp:req, bad:req +#ifndef CONFIG_CPU_USE_DOMAINS + adds \tmp, \addr, #\size - 1 + sbcscc \tmp, \tmp, \limit + bcs \bad +#ifdef CONFIG_CPU_SPECTRE + movcs \addr, #0 + csdb +#endif +#endif + .endm + + .macro uaccess_mask_range_ptr, addr:req, size:req, limit:req, tmp:req +#ifdef CONFIG_CPU_SPECTRE + sub \tmp, \limit, #1 + subs \tmp, \tmp, \addr @ tmp = limit - 1 - addr + addhs \tmp, \tmp, #1 @ if (tmp >= 0) { + subshs \tmp, \tmp, \size @ tmp = limit - (addr + size) } + movlo \addr, #0 @ if (tmp < 0) addr = NULL + csdb +#endif + .endm + + .macro uaccess_disable, tmp, isb=1 +#ifdef CONFIG_CPU_SW_DOMAIN_PAN + /* + * Whenever we re-enter userspace, the domains should always be + * set appropriately. + */ + mov \tmp, #DACR_UACCESS_DISABLE + mcr p15, 0, \tmp, c3, c0, 0 @ Set domain register + .if \isb + instr_sync + .endif +#endif + .endm + + .macro uaccess_enable, tmp, isb=1 +#ifdef CONFIG_CPU_SW_DOMAIN_PAN + /* + * Whenever we re-enter userspace, the domains should always be + * set appropriately. + */ + mov \tmp, #DACR_UACCESS_ENABLE + mcr p15, 0, \tmp, c3, c0, 0 + .if \isb + instr_sync + .endif +#endif + .endm + +#if defined(CONFIG_CPU_SW_DOMAIN_PAN) || defined(CONFIG_CPU_USE_DOMAINS) +#define DACR(x...) x +#else +#define DACR(x...) 
+#endif + + /* + * Save the address limit on entry to a privileged exception. + * + * If we are using the DACR for kernel access by the user accessors + * (CONFIG_CPU_USE_DOMAINS=y), always reset the DACR kernel domain + * back to client mode, whether or not \disable is set. + * + * If we are using SW PAN, set the DACR user domain to no access + * if \disable is set. + */ + .macro uaccess_entry, tsk, tmp0, tmp1, tmp2, disable + ldr \tmp1, [\tsk, #TI_ADDR_LIMIT] + mov \tmp2, #TASK_SIZE + str \tmp2, [\tsk, #TI_ADDR_LIMIT] + DACR( mrc p15, 0, \tmp0, c3, c0, 0) + DACR( str \tmp0, [sp, #SVC_DACR]) + str \tmp1, [sp, #SVC_ADDR_LIMIT] + .if \disable && IS_ENABLED(CONFIG_CPU_SW_DOMAIN_PAN) + /* kernel=client, user=no access */ + mov \tmp2, #DACR_UACCESS_DISABLE + mcr p15, 0, \tmp2, c3, c0, 0 + instr_sync + .elseif IS_ENABLED(CONFIG_CPU_USE_DOMAINS) + /* kernel=client */ + bic \tmp2, \tmp0, #domain_mask(DOMAIN_KERNEL) + orr \tmp2, \tmp2, #domain_val(DOMAIN_KERNEL, DOMAIN_CLIENT) + mcr p15, 0, \tmp2, c3, c0, 0 + instr_sync + .endif + .endm + + /* Restore the user access state previously saved by uaccess_entry */ + .macro uaccess_exit, tsk, tmp0, tmp1 + ldr \tmp1, [sp, #SVC_ADDR_LIMIT] + DACR( ldr \tmp0, [sp, #SVC_DACR]) + str \tmp1, [\tsk, #TI_ADDR_LIMIT] + DACR( mcr p15, 0, \tmp0, c3, c0, 0) + .endm + +#undef DACR + +#endif /* __ASM_UACCESS_ASM_H__ */ diff --git a/arch/arm/kernel/atags_proc.c b/arch/arm/kernel/atags_proc.c index 4247ebf4b893..3c2faf2bd124 100644 --- a/arch/arm/kernel/atags_proc.c +++ b/arch/arm/kernel/atags_proc.c @@ -42,7 +42,7 @@ static int __init init_atags_procfs(void) size_t size; if (tag->hdr.tag != ATAG_CORE) { - pr_info("No ATAGs?"); + pr_info("No ATAGs?\n"); return -EINVAL; } diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S index 77f54830554c..55a47df04773 100644 --- a/arch/arm/kernel/entry-armv.S +++ b/arch/arm/kernel/entry-armv.S @@ -27,6 +27,7 @@ #include <asm/unistd.h> #include <asm/tls.h> #include <asm/system_info.h> +#include <asm/uaccess-asm.h> #include "entry-header.S" #include <asm/entry-macro-multi.S> @@ -179,15 +180,7 @@ ENDPROC(__und_invalid) stmia r7, {r2 - r6} get_thread_info tsk - ldr r0, [tsk, #TI_ADDR_LIMIT] - mov r1, #TASK_SIZE - str r1, [tsk, #TI_ADDR_LIMIT] - str r0, [sp, #SVC_ADDR_LIMIT] - - uaccess_save r0 - .if \uaccess - uaccess_disable r0 - .endif + uaccess_entry tsk, r0, r1, r2, \uaccess .if \trace #ifdef CONFIG_TRACE_IRQFLAGS diff --git a/arch/arm/kernel/entry-header.S b/arch/arm/kernel/entry-header.S index 32051ec5b33f..40db0f9188b6 100644 --- a/arch/arm/kernel/entry-header.S +++ b/arch/arm/kernel/entry-header.S @@ -6,6 +6,7 @@ #include <asm/asm-offsets.h> #include <asm/errno.h> #include <asm/thread_info.h> +#include <asm/uaccess-asm.h> #include <asm/v7m.h> @ Bad Abort numbers @@ -217,9 +218,7 @@ blne trace_hardirqs_off #endif .endif - ldr r1, [sp, #SVC_ADDR_LIMIT] - uaccess_restore - str r1, [tsk, #TI_ADDR_LIMIT] + uaccess_exit tsk, r0, r1 #ifndef CONFIG_THUMB2_KERNEL @ ARM mode SVC restore @@ -263,9 +262,7 @@ @ on the stack remains correct). 
@ .macro svc_exit_via_fiq - ldr r1, [sp, #SVC_ADDR_LIMIT] - uaccess_restore - str r1, [tsk, #TI_ADDR_LIMIT] + uaccess_exit tsk, r0, r1 #ifndef CONFIG_THUMB2_KERNEL @ ARM mode restore mov r0, sp diff --git a/arch/arm/kernel/ptrace.c b/arch/arm/kernel/ptrace.c index b606cded90cd..4cc6a7eff635 100644 --- a/arch/arm/kernel/ptrace.c +++ b/arch/arm/kernel/ptrace.c @@ -219,8 +219,8 @@ static struct undef_hook arm_break_hook = { }; static struct undef_hook thumb_break_hook = { - .instr_mask = 0xffff, - .instr_val = 0xde01, + .instr_mask = 0xffffffff, + .instr_val = 0x0000de01, .cpsr_mask = PSR_T_BIT, .cpsr_val = PSR_T_BIT, .fn = break_trap, diff --git a/arch/arm64/boot/dts/mediatek/mt8173.dtsi b/arch/arm64/boot/dts/mediatek/mt8173.dtsi index ccb8e88a60c5..d819e44d94a8 100644 --- a/arch/arm64/boot/dts/mediatek/mt8173.dtsi +++ b/arch/arm64/boot/dts/mediatek/mt8173.dtsi @@ -1402,8 +1402,8 @@ "venc_lt_sel"; assigned-clocks = <&topckgen CLK_TOP_VENC_SEL>, <&topckgen CLK_TOP_VENC_LT_SEL>; - assigned-clock-parents = <&topckgen CLK_TOP_VENCPLL_D2>, - <&topckgen CLK_TOP_UNIVPLL1_D2>; + assigned-clock-parents = <&topckgen CLK_TOP_VCODECPLL>, + <&topckgen CLK_TOP_VCODECPLL_370P5>; }; jpegdec: jpegdec@18004000 { diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c index 061f60fe452f..bb813d06114a 100644 --- a/arch/arm64/kernel/smp.c +++ b/arch/arm64/kernel/smp.c @@ -176,7 +176,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle) panic("CPU%u detected unsupported configuration\n", cpu); } - return ret; + return -EIO; } static void init_gic_priority_masking(void) diff --git a/arch/csky/abiv1/inc/abi/entry.h b/arch/csky/abiv1/inc/abi/entry.h index 61d94ec7dd16..13c23e2c707c 100644 --- a/arch/csky/abiv1/inc/abi/entry.h +++ b/arch/csky/abiv1/inc/abi/entry.h @@ -80,7 +80,6 @@ .endm .macro RESTORE_ALL - psrclr ie ldw lr, (sp, 4) ldw a0, (sp, 8) mtcr a0, epc @@ -175,9 +174,4 @@ movi r6, 0 cpwcr r6, cpcr31 .endm - -.macro ANDI_R3 rx, imm - lsri \rx, 3 - andi \rx, (\imm >> 3) -.endm #endif /* __ASM_CSKY_ENTRY_H */ diff --git a/arch/csky/abiv2/inc/abi/entry.h b/arch/csky/abiv2/inc/abi/entry.h index ab63c41abcca..4fdd6c12e7ff 100644 --- a/arch/csky/abiv2/inc/abi/entry.h +++ b/arch/csky/abiv2/inc/abi/entry.h @@ -13,6 +13,8 @@ #define LSAVE_A1 28 #define LSAVE_A2 32 #define LSAVE_A3 36 +#define LSAVE_A4 40 +#define LSAVE_A5 44 #define KSPTOUSP #define USPTOKSP @@ -63,7 +65,6 @@ .endm .macro RESTORE_ALL - psrclr ie ldw tls, (sp, 0) ldw lr, (sp, 4) ldw a0, (sp, 8) @@ -301,9 +302,4 @@ jmpi 3f /* jump to va */ 3: .endm - -.macro ANDI_R3 rx, imm - lsri \rx, 3 - andi \rx, (\imm >> 3) -.endm #endif /* __ASM_CSKY_ENTRY_H */ diff --git a/arch/csky/include/asm/thread_info.h b/arch/csky/include/asm/thread_info.h index 5c61e84e790f..8980e4e64391 100644 --- a/arch/csky/include/asm/thread_info.h +++ b/arch/csky/include/asm/thread_info.h @@ -81,4 +81,10 @@ static inline struct thread_info *current_thread_info(void) #define _TIF_RESTORE_SIGMASK (1 << TIF_RESTORE_SIGMASK) #define _TIF_SECCOMP (1 << TIF_SECCOMP) +#define _TIF_WORK_MASK (_TIF_NEED_RESCHED | _TIF_SIGPENDING | \ + _TIF_NOTIFY_RESUME | _TIF_UPROBE) + +#define _TIF_SYSCALL_WORK (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \ + _TIF_SYSCALL_TRACEPOINT) + #endif /* _ASM_CSKY_THREAD_INFO_H */ diff --git a/arch/csky/kernel/entry.S b/arch/csky/kernel/entry.S index 3760397fdd3d..f13800383a19 100644 --- a/arch/csky/kernel/entry.S +++ b/arch/csky/kernel/entry.S @@ -128,39 +128,41 @@ tlbop_end 1 ENTRY(csky_systemcall) SAVE_ALL TRAP0_SIZE zero_fp -#ifdef CONFIG_RSEQ_DEBUG 
- mov a0, sp - jbsr rseq_syscall -#endif psrset ee, ie - lrw r11, __NR_syscalls - cmphs syscallid, r11 /* Check nr of syscall */ - bt ret_from_exception + lrw r9, __NR_syscalls + cmphs syscallid, r9 /* Check nr of syscall */ + bt 1f - lrw r13, sys_call_table - ixw r13, syscallid - ldw r11, (r13) - cmpnei r11, 0 + lrw r9, sys_call_table + ixw r9, syscallid + ldw syscallid, (r9) + cmpnei syscallid, 0 bf ret_from_exception mov r9, sp bmaski r10, THREAD_SHIFT andn r9, r10 - ldw r12, (r9, TINFO_FLAGS) - ANDI_R3 r12, (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_TRACEPOINT | _TIF_SYSCALL_AUDIT) - cmpnei r12, 0 + ldw r10, (r9, TINFO_FLAGS) + lrw r9, _TIF_SYSCALL_WORK + and r10, r9 + cmpnei r10, 0 bt csky_syscall_trace #if defined(__CSKYABIV2__) subi sp, 8 stw r5, (sp, 0x4) stw r4, (sp, 0x0) - jsr r11 /* Do system call */ + jsr syscallid /* Do system call */ addi sp, 8 #else - jsr r11 + jsr syscallid #endif stw a0, (sp, LSAVE_A0) /* Save return value */ +1: +#ifdef CONFIG_DEBUG_RSEQ + mov a0, sp + jbsr rseq_syscall +#endif jmpi ret_from_exception csky_syscall_trace: @@ -173,18 +175,23 @@ csky_syscall_trace: ldw a3, (sp, LSAVE_A3) #if defined(__CSKYABIV2__) subi sp, 8 - stw r5, (sp, 0x4) - stw r4, (sp, 0x0) + ldw r9, (sp, LSAVE_A4) + stw r9, (sp, 0x0) + ldw r9, (sp, LSAVE_A5) + stw r9, (sp, 0x4) + jsr syscallid /* Do system call */ + addi sp, 8 #else ldw r6, (sp, LSAVE_A4) ldw r7, (sp, LSAVE_A5) -#endif - jsr r11 /* Do system call */ -#if defined(__CSKYABIV2__) - addi sp, 8 + jsr syscallid /* Do system call */ #endif stw a0, (sp, LSAVE_A0) /* Save return value */ +#ifdef CONFIG_DEBUG_RSEQ + mov a0, sp + jbsr rseq_syscall +#endif mov a0, sp /* right now, sp --> pt_regs */ jbsr syscall_trace_exit br ret_from_exception @@ -200,18 +207,20 @@ ENTRY(ret_from_fork) mov r9, sp bmaski r10, THREAD_SHIFT andn r9, r10 - ldw r12, (r9, TINFO_FLAGS) - ANDI_R3 r12, (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_TRACEPOINT | _TIF_SYSCALL_AUDIT) - cmpnei r12, 0 + ldw r10, (r9, TINFO_FLAGS) + lrw r9, _TIF_SYSCALL_WORK + and r10, r9 + cmpnei r10, 0 bf ret_from_exception mov a0, sp /* sp = pt_regs pointer */ jbsr syscall_trace_exit ret_from_exception: - ld syscallid, (sp, LSAVE_PSR) - btsti syscallid, 31 - bt 1f + psrclr ie + ld r9, (sp, LSAVE_PSR) + btsti r9, 31 + bt 1f /* * Load address of current->thread_info, Then get address of task_struct * Get task_needreshed in task_struct @@ -220,11 +229,24 @@ ret_from_exception: bmaski r10, THREAD_SHIFT andn r9, r10 - ldw r12, (r9, TINFO_FLAGS) - andi r12, (_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | _TIF_NEED_RESCHED | _TIF_UPROBE) - cmpnei r12, 0 + ldw r10, (r9, TINFO_FLAGS) + lrw r9, _TIF_WORK_MASK + and r10, r9 + cmpnei r10, 0 bt exit_work 1: +#ifdef CONFIG_PREEMPTION + mov r9, sp + bmaski r10, THREAD_SHIFT + andn r9, r10 + + ldw r10, (r9, TINFO_PREEMPT) + cmpnei r10, 0 + bt 2f + jbsr preempt_schedule_irq /* irq en/disable is done inside */ +2: +#endif + #ifdef CONFIG_TRACE_IRQFLAGS ld r10, (sp, LSAVE_PSR) btsti r10, 6 @@ -235,14 +257,15 @@ ret_from_exception: RESTORE_ALL exit_work: - lrw syscallid, ret_from_exception - mov lr, syscallid + lrw r9, ret_from_exception + mov lr, r9 - btsti r12, TIF_NEED_RESCHED + btsti r10, TIF_NEED_RESCHED bt work_resched + psrset ie mov a0, sp - mov a1, r12 + mov a1, r10 jmpi do_notify_resume work_resched: @@ -291,34 +314,10 @@ ENTRY(csky_irq) jbsr trace_hardirqs_off #endif -#ifdef CONFIG_PREEMPTION - mov r9, sp /* Get current stack pointer */ - bmaski r10, THREAD_SHIFT - andn r9, r10 /* Get thread_info */ - - /* - * Get task_struct->stack.preempt_count for current, 
- * and increase 1. - */ - ldw r12, (r9, TINFO_PREEMPT) - addi r12, 1 - stw r12, (r9, TINFO_PREEMPT) -#endif mov a0, sp jbsr csky_do_IRQ -#ifdef CONFIG_PREEMPTION - subi r12, 1 - stw r12, (r9, TINFO_PREEMPT) - cmpnei r12, 0 - bt 2f - ldw r12, (r9, TINFO_FLAGS) - btsti r12, TIF_NEED_RESCHED - bf 2f - jbsr preempt_schedule_irq /* irq en/disable is done inside */ -#endif -2: jmpi ret_from_exception /* diff --git a/arch/ia64/include/asm/device.h b/arch/ia64/include/asm/device.h index 410a769ece95..3eb397415381 100644 --- a/arch/ia64/include/asm/device.h +++ b/arch/ia64/include/asm/device.h @@ -6,7 +6,7 @@ #define _ASM_IA64_DEVICE_H struct dev_archdata { -#ifdef CONFIG_INTEL_IOMMU +#ifdef CONFIG_IOMMU_API void *iommu; /* hook for IOMMU specific extension */ #endif }; diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c index 5224fb38d766..01d7071b23f7 100644 --- a/arch/parisc/mm/init.c +++ b/arch/parisc/mm/init.c @@ -562,7 +562,7 @@ void __init mem_init(void) > BITS_PER_LONG); high_memory = __va((max_pfn << PAGE_SHIFT)); - set_max_mapnr(page_to_pfn(virt_to_page(high_memory - 1)) + 1); + set_max_mapnr(max_low_pfn); memblock_free_all(); #ifdef CONFIG_PA11 diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig index d13b5328ca10..b29d7cb38368 100644 --- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -126,6 +126,7 @@ config PPC select ARCH_HAS_MMIOWB if PPC64 select ARCH_HAS_PHYS_TO_DMA select ARCH_HAS_PMEM_API + select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE select ARCH_HAS_PTE_DEVMAP if PPC_BOOK3S_64 select ARCH_HAS_PTE_SPECIAL select ARCH_HAS_MEMBARRIER_CALLBACKS diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile index 1c4385852d3d..244542ae2a91 100644 --- a/arch/powerpc/kernel/Makefile +++ b/arch/powerpc/kernel/Makefile @@ -162,6 +162,9 @@ UBSAN_SANITIZE_kprobes.o := n GCOV_PROFILE_kprobes-ftrace.o := n KCOV_INSTRUMENT_kprobes-ftrace.o := n UBSAN_SANITIZE_kprobes-ftrace.o := n +GCOV_PROFILE_syscall_64.o := n +KCOV_INSTRUMENT_syscall_64.o := n +UBSAN_SANITIZE_syscall_64.o := n UBSAN_SANITIZE_vdso.o := n # Necessary for booting with kcov enabled on book3e machines diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index b0ad930cbae5..ebeebab74b56 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -2411,6 +2411,7 @@ EXC_COMMON_BEGIN(facility_unavailable_common) GEN_COMMON facility_unavailable addi r3,r1,STACK_FRAME_OVERHEAD bl facility_unavailable_exception + REST_NVGPRS(r1) /* instruction emulation may change GPRs */ b interrupt_return GEN_KVM facility_unavailable @@ -2440,6 +2441,7 @@ EXC_COMMON_BEGIN(h_facility_unavailable_common) GEN_COMMON h_facility_unavailable addi r3,r1,STACK_FRAME_OVERHEAD bl facility_unavailable_exception + REST_NVGPRS(r1) /* XXX Shouldn't be necessary in practice */ b interrupt_return GEN_KVM h_facility_unavailable diff --git a/arch/x86/include/asm/device.h b/arch/x86/include/asm/device.h index 7e31f7f1bb06..49bd6cf3eec9 100644 --- a/arch/x86/include/asm/device.h +++ b/arch/x86/include/asm/device.h @@ -3,7 +3,7 @@ #define _ASM_X86_DEVICE_H struct dev_archdata { -#if defined(CONFIG_INTEL_IOMMU) || defined(CONFIG_AMD_IOMMU) +#ifdef CONFIG_IOMMU_API void *iommu; /* hook for IOMMU specific extension */ #endif }; diff --git a/arch/x86/include/asm/dma.h b/arch/x86/include/asm/dma.h index 00f7cf45e699..8e95aa4b0d17 100644 --- a/arch/x86/include/asm/dma.h +++ b/arch/x86/include/asm/dma.h @@ -74,7 +74,7 @@ #define MAX_DMA_PFN ((16UL * 1024 * 1024) >> PAGE_SHIFT) 
/* 4GB broken PCI/AGP hardware bus master zone */ -#define MAX_DMA32_PFN ((4UL * 1024 * 1024 * 1024) >> PAGE_SHIFT) +#define MAX_DMA32_PFN (1UL << (32 - PAGE_SHIFT)) #ifdef CONFIG_X86_32 /* The maximum address that we can perform a DMA transfer to on this platform */ diff --git a/arch/x86/include/asm/io_bitmap.h b/arch/x86/include/asm/io_bitmap.h index 07344d82e88e..ac1a99ffbd8d 100644 --- a/arch/x86/include/asm/io_bitmap.h +++ b/arch/x86/include/asm/io_bitmap.h @@ -17,7 +17,7 @@ struct task_struct; #ifdef CONFIG_X86_IOPL_IOPERM void io_bitmap_share(struct task_struct *tsk); -void io_bitmap_exit(void); +void io_bitmap_exit(struct task_struct *tsk); void native_tss_update_io_bitmap(void); @@ -29,7 +29,7 @@ void native_tss_update_io_bitmap(void); #else static inline void io_bitmap_share(struct task_struct *tsk) { } -static inline void io_bitmap_exit(void) { } +static inline void io_bitmap_exit(struct task_struct *tsk) { } static inline void tss_update_io_bitmap(void) { } #endif diff --git a/arch/x86/include/uapi/asm/unistd.h b/arch/x86/include/uapi/asm/unistd.h index 196fdd02b8b1..be5e2e747f50 100644 --- a/arch/x86/include/uapi/asm/unistd.h +++ b/arch/x86/include/uapi/asm/unistd.h @@ -2,8 +2,15 @@ #ifndef _UAPI_ASM_X86_UNISTD_H #define _UAPI_ASM_X86_UNISTD_H -/* x32 syscall flag bit */ -#define __X32_SYSCALL_BIT 0x40000000UL +/* + * x32 syscall flag bit. Some user programs expect syscall NR macros + * and __X32_SYSCALL_BIT to have type int, even though syscall numbers + * are, for practical purposes, unsigned long. + * + * Fortunately, expressions like (nr & ~__X32_SYSCALL_BIT) do the right + * thing regardless. + */ +#define __X32_SYSCALL_BIT 0x40000000 #ifndef __KERNEL__ # ifdef __i386__ diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c index 32b153d38748..6a54e83d5589 100644 --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -957,18 +957,31 @@ static inline bool xfeatures_mxcsr_quirk(u64 xfeatures) return true; } -/* - * This is similar to user_regset_copyout(), but will not add offset to - * the source data pointer or increment pos, count, kbuf, and ubuf. 
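Referring back to the asm/unistd.h hunk above: the new comment points out that expressions such as nr & ~__X32_SYSCALL_BIT behave correctly even though the constant now has type int. A small stand-alone userspace sketch (the syscall number is hypothetical; the macro value is copied from the hunk):

#include <stdbool.h>
#include <stdio.h>

/* Value taken from the hunk above; plain int after this change. */
#define __X32_SYSCALL_BIT 0x40000000

int main(void)
{
	unsigned long nr = 0x40000001UL;	/* hypothetical x32 syscall number */
	bool is_x32 = nr & __X32_SYSCALL_BIT;
	/* ~__X32_SYSCALL_BIT is sign-extended when converted to unsigned long,
	 * so only bit 30 is cleared and the upper bits of nr are preserved.
	 */
	unsigned long base_nr = nr & ~__X32_SYSCALL_BIT;

	printf("x32: %d, base syscall number: %lu\n", is_x32, base_nr);
	return 0;
}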
- */ -static inline void -__copy_xstate_to_kernel(void *kbuf, const void *data, - unsigned int offset, unsigned int size, unsigned int size_total) +static void fill_gap(unsigned to, void **kbuf, unsigned *pos, unsigned *count) { - if (offset < size_total) { - unsigned int copy = min(size, size_total - offset); + if (*pos < to) { + unsigned size = to - *pos; + + if (size > *count) + size = *count; + memcpy(*kbuf, (void *)&init_fpstate.xsave + *pos, size); + *kbuf += size; + *pos += size; + *count -= size; + } +} - memcpy(kbuf + offset, data, copy); +static void copy_part(unsigned offset, unsigned size, void *from, + void **kbuf, unsigned *pos, unsigned *count) +{ + fill_gap(offset, kbuf, pos, count); + if (size > *count) + size = *count; + if (size) { + memcpy(*kbuf, from, size); + *kbuf += size; + *pos += size; + *count -= size; } } @@ -981,8 +994,9 @@ __copy_xstate_to_kernel(void *kbuf, const void *data, */ int copy_xstate_to_kernel(void *kbuf, struct xregs_state *xsave, unsigned int offset_start, unsigned int size_total) { - unsigned int offset, size; struct xstate_header header; + const unsigned off_mxcsr = offsetof(struct fxregs_state, mxcsr); + unsigned count = size_total; int i; /* @@ -998,46 +1012,42 @@ int copy_xstate_to_kernel(void *kbuf, struct xregs_state *xsave, unsigned int of header.xfeatures = xsave->header.xfeatures; header.xfeatures &= ~XFEATURE_MASK_SUPERVISOR; + if (header.xfeatures & XFEATURE_MASK_FP) + copy_part(0, off_mxcsr, + &xsave->i387, &kbuf, &offset_start, &count); + if (header.xfeatures & (XFEATURE_MASK_SSE | XFEATURE_MASK_YMM)) + copy_part(off_mxcsr, MXCSR_AND_FLAGS_SIZE, + &xsave->i387.mxcsr, &kbuf, &offset_start, &count); + if (header.xfeatures & XFEATURE_MASK_FP) + copy_part(offsetof(struct fxregs_state, st_space), 128, + &xsave->i387.st_space, &kbuf, &offset_start, &count); + if (header.xfeatures & XFEATURE_MASK_SSE) + copy_part(xstate_offsets[XFEATURE_MASK_SSE], 256, + &xsave->i387.xmm_space, &kbuf, &offset_start, &count); + /* + * Fill xsave->i387.sw_reserved value for ptrace frame: + */ + copy_part(offsetof(struct fxregs_state, sw_reserved), 48, + xstate_fx_sw_bytes, &kbuf, &offset_start, &count); /* * Copy xregs_state->header: */ - offset = offsetof(struct xregs_state, header); - size = sizeof(header); - - __copy_xstate_to_kernel(kbuf, &header, offset, size, size_total); + copy_part(offsetof(struct xregs_state, header), sizeof(header), + &header, &kbuf, &offset_start, &count); - for (i = 0; i < XFEATURE_MAX; i++) { + for (i = FIRST_EXTENDED_XFEATURE; i < XFEATURE_MAX; i++) { /* * Copy only in-use xstates: */ if ((header.xfeatures >> i) & 1) { void *src = __raw_xsave_addr(xsave, i); - offset = xstate_offsets[i]; - size = xstate_sizes[i]; - - /* The next component has to fit fully into the output buffer: */ - if (offset + size > size_total) - break; - - __copy_xstate_to_kernel(kbuf, src, offset, size, size_total); + copy_part(xstate_offsets[i], xstate_sizes[i], + src, &kbuf, &offset_start, &count); } } - - if (xfeatures_mxcsr_quirk(header.xfeatures)) { - offset = offsetof(struct fxregs_state, mxcsr); - size = MXCSR_AND_FLAGS_SIZE; - __copy_xstate_to_kernel(kbuf, &xsave->i387.mxcsr, offset, size, size_total); - } - - /* - * Fill xsave->i387.sw_reserved value for ptrace frame: - */ - offset = offsetof(struct fxregs_state, sw_reserved); - size = sizeof(xstate_fx_sw_bytes); - - __copy_xstate_to_kernel(kbuf, xstate_fx_sw_bytes, offset, size, size_total); + fill_gap(size_total, &kbuf, &offset_start, &count); return 0; } diff --git a/arch/x86/kernel/ioport.c 
b/arch/x86/kernel/ioport.c index a53e7b4a7419..e2fab3ceb09f 100644 --- a/arch/x86/kernel/ioport.c +++ b/arch/x86/kernel/ioport.c @@ -33,15 +33,15 @@ void io_bitmap_share(struct task_struct *tsk) set_tsk_thread_flag(tsk, TIF_IO_BITMAP); } -static void task_update_io_bitmap(void) +static void task_update_io_bitmap(struct task_struct *tsk) { - struct thread_struct *t = ¤t->thread; + struct thread_struct *t = &tsk->thread; if (t->iopl_emul == 3 || t->io_bitmap) { /* TSS update is handled on exit to user space */ - set_thread_flag(TIF_IO_BITMAP); + set_tsk_thread_flag(tsk, TIF_IO_BITMAP); } else { - clear_thread_flag(TIF_IO_BITMAP); + clear_tsk_thread_flag(tsk, TIF_IO_BITMAP); /* Invalidate TSS */ preempt_disable(); tss_update_io_bitmap(); @@ -49,12 +49,12 @@ static void task_update_io_bitmap(void) } } -void io_bitmap_exit(void) +void io_bitmap_exit(struct task_struct *tsk) { - struct io_bitmap *iobm = current->thread.io_bitmap; + struct io_bitmap *iobm = tsk->thread.io_bitmap; - current->thread.io_bitmap = NULL; - task_update_io_bitmap(); + tsk->thread.io_bitmap = NULL; + task_update_io_bitmap(tsk); if (iobm && refcount_dec_and_test(&iobm->refcnt)) kfree(iobm); } @@ -102,7 +102,7 @@ long ksys_ioperm(unsigned long from, unsigned long num, int turn_on) if (!iobm) return -ENOMEM; refcount_set(&iobm->refcnt, 1); - io_bitmap_exit(); + io_bitmap_exit(current); } /* @@ -134,7 +134,7 @@ long ksys_ioperm(unsigned long from, unsigned long num, int turn_on) } /* All permissions dropped? */ if (max_long == UINT_MAX) { - io_bitmap_exit(); + io_bitmap_exit(current); return 0; } @@ -192,7 +192,7 @@ SYSCALL_DEFINE1(iopl, unsigned int, level) } t->iopl_emul = level; - task_update_io_bitmap(); + task_update_io_bitmap(current); return 0; } diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c index 9da70b279dad..35638f1c5791 100644 --- a/arch/x86/kernel/process.c +++ b/arch/x86/kernel/process.c @@ -96,7 +96,7 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src) } /* - * Free current thread data structures etc.. + * Free thread data structures etc.. */ void exit_thread(struct task_struct *tsk) { @@ -104,7 +104,7 @@ void exit_thread(struct task_struct *tsk) struct fpu *fpu = &t->fpu; if (test_thread_flag(TIF_IO_BITMAP)) - io_bitmap_exit(); + io_bitmap_exit(tsk); free_vm86(t); diff --git a/block/blk-core.c b/block/blk-core.c index 7e4a1da0715e..9bfaee050c82 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -891,14 +891,11 @@ generic_make_request_checks(struct bio *bio) } /* - * Non-mq queues do not honor REQ_NOWAIT, so complete a bio - * with BLK_STS_AGAIN status in order to catch -EAGAIN and - * to give a chance to the caller to repeat request gracefully. + * For a REQ_NOWAIT based request, return -EOPNOTSUPP + * if queue is not a request based queue. */ - if ((bio->bi_opf & REQ_NOWAIT) && !queue_is_mq(q)) { - status = BLK_STS_AGAIN; - goto end_io; - } + if ((bio->bi_opf & REQ_NOWAIT) && !queue_is_mq(q)) + goto not_supported; if (should_fail_bio(bio)) goto end_io; diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c index 59f911e57719..4ad5c5adc0a3 100644 --- a/drivers/base/regmap/regmap.c +++ b/drivers/base/regmap/regmap.c @@ -2936,6 +2936,28 @@ int regmap_update_bits_base(struct regmap *map, unsigned int reg, } EXPORT_SYMBOL_GPL(regmap_update_bits_base); +/** + * regmap_test_bits() - Check if all specified bits are set in a register. 
+ * + * @map: Register map to operate on + * @reg: Register to read from + * @bits: Bits to test + * + * Returns -1 if the underlying regmap_read() fails, 0 if at least one of the + * tested bits is not set and 1 if all tested bits are set. + */ +int regmap_test_bits(struct regmap *map, unsigned int reg, unsigned int bits) +{ + unsigned int val, ret; + + ret = regmap_read(map, reg, &val); + if (ret) + return ret; + + return (val & bits) == bits; +} +EXPORT_SYMBOL_GPL(regmap_test_bits); + void regmap_async_complete_cb(struct regmap_async *async, int ret) { struct regmap *map = async->map; diff --git a/drivers/bluetooth/btbcm.c b/drivers/bluetooth/btbcm.c index df7a8a22e53c..1b9743b7f2ef 100644 --- a/drivers/bluetooth/btbcm.c +++ b/drivers/bluetooth/btbcm.c @@ -414,11 +414,12 @@ static const struct bcm_subver_table bcm_usb_subver_table[] = { { 0x2118, "BCM20702A0" }, /* 001.001.024 */ { 0x2126, "BCM4335A0" }, /* 001.001.038 */ { 0x220e, "BCM20702A1" }, /* 001.002.014 */ - { 0x230f, "BCM4354A2" }, /* 001.003.015 */ + { 0x230f, "BCM4356A2" }, /* 001.003.015 */ { 0x4106, "BCM4335B0" }, /* 002.001.006 */ { 0x410e, "BCM20702B0" }, /* 002.001.014 */ { 0x6109, "BCM4335C0" }, /* 003.001.009 */ { 0x610c, "BCM4354" }, /* 003.001.012 */ + { 0x6607, "BCM4350C5" }, /* 003.006.007 */ { } }; diff --git a/drivers/bluetooth/btmtkuart.c b/drivers/bluetooth/btmtkuart.c index e11169ad8247..6c40bc75fb5b 100644 --- a/drivers/bluetooth/btmtkuart.c +++ b/drivers/bluetooth/btmtkuart.c @@ -695,8 +695,7 @@ static int btmtkuart_change_baudrate(struct hci_dev *hdev) /* Send a dummy byte 0xff to activate the new baudrate */ param = 0xff; - err = serdev_device_write(bdev->serdev, ¶m, sizeof(param), - MAX_SCHEDULE_TIMEOUT); + err = serdev_device_write_buf(bdev->serdev, ¶m, sizeof(param)); if (err < 0 || err < sizeof(param)) return err; @@ -1015,7 +1014,7 @@ static int btmtkuart_probe(struct serdev_device *serdev) if (btmtkuart_is_standalone(bdev)) { err = clk_prepare_enable(bdev->osc); if (err < 0) - return err; + goto err_hci_free_dev; if (bdev->boot) { gpiod_set_value_cansleep(bdev->boot, 1); @@ -1028,10 +1027,8 @@ static int btmtkuart_probe(struct serdev_device *serdev) /* Power on */ err = regulator_enable(bdev->vcc); - if (err < 0) { - clk_disable_unprepare(bdev->osc); - return err; - } + if (err < 0) + goto err_clk_disable_unprepare; /* Reset if the reset-gpios is available otherwise the board * -level design should be guaranteed. 
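Returning to the regmap hunk above, which adds regmap_test_bits(): a minimal sketch of a caller with a hypothetical register layout. Per the new kernel-doc, the helper returns 1 when all tested bits are set, 0 when at least one is clear, and an error value when the underlying read fails:

#include <linux/bits.h>
#include <linux/errno.h>
#include <linux/regmap.h>

/* Hypothetical register layout, for illustration only. */
#define CHIP_STATUS_REG		0x04
#define CHIP_STATUS_READY	BIT(0)
#define CHIP_STATUS_PLL_LOCK	BIT(1)

static int chip_check_ready(struct regmap *map)
{
	int ret;

	ret = regmap_test_bits(map, CHIP_STATUS_REG,
			       CHIP_STATUS_READY | CHIP_STATUS_PLL_LOCK);
	if (ret < 0)		/* read failure propagated from regmap_read() */
		return ret;

	return ret ? 0 : -EBUSY;	/* 1 = both bits set, 0 = not ready */
}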
@@ -1063,7 +1060,6 @@ static int btmtkuart_probe(struct serdev_device *serdev) err = hci_register_dev(hdev); if (err < 0) { dev_err(&serdev->dev, "Can't register HCI device\n"); - hci_free_dev(hdev); goto err_regulator_disable; } @@ -1072,6 +1068,11 @@ static int btmtkuart_probe(struct serdev_device *serdev) err_regulator_disable: if (btmtkuart_is_standalone(bdev)) regulator_disable(bdev->vcc); +err_clk_disable_unprepare: + if (btmtkuart_is_standalone(bdev)) + clk_disable_unprepare(bdev->osc); +err_hci_free_dev: + hci_free_dev(hdev); return err; } diff --git a/drivers/bluetooth/btqca.c b/drivers/bluetooth/btqca.c index 3ea866d44568..c5984966f315 100644 --- a/drivers/bluetooth/btqca.c +++ b/drivers/bluetooth/btqca.c @@ -74,17 +74,21 @@ int qca_read_soc_version(struct hci_dev *hdev, u32 *soc_version, ver = (struct qca_btsoc_version *)(edl->data); - BT_DBG("%s: Product:0x%08x", hdev->name, le32_to_cpu(ver->product_id)); - BT_DBG("%s: Patch :0x%08x", hdev->name, le16_to_cpu(ver->patch_ver)); - BT_DBG("%s: ROM :0x%08x", hdev->name, le16_to_cpu(ver->rom_ver)); - BT_DBG("%s: SOC :0x%08x", hdev->name, le32_to_cpu(ver->soc_id)); + bt_dev_info(hdev, "QCA Product ID :0x%08x", + le32_to_cpu(ver->product_id)); + bt_dev_info(hdev, "QCA SOC Version :0x%08x", + le32_to_cpu(ver->soc_id)); + bt_dev_info(hdev, "QCA ROM Version :0x%08x", + le16_to_cpu(ver->rom_ver)); + bt_dev_info(hdev, "QCA Patch Version:0x%08x", + le16_to_cpu(ver->patch_ver)); /* QCA chipset version can be decided by patch and SoC * version, combination with upper 2 bytes from SoC * and lower 2 bytes from patch will be used. */ *soc_version = (le32_to_cpu(ver->soc_id) << 16) | - (le16_to_cpu(ver->rom_ver) & 0x0000ffff); + (le16_to_cpu(ver->rom_ver) & 0x0000ffff); if (*soc_version == 0) err = -EILSEQ; diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c index b3fd07a6f812..81c3c38baba1 100644 --- a/drivers/bluetooth/hci_qca.c +++ b/drivers/bluetooth/hci_qca.c @@ -75,6 +75,9 @@ enum qca_flags { QCA_HW_ERROR_EVENT }; +enum qca_capabilities { + QCA_CAP_WIDEBAND_SPEECH = BIT(0), +}; /* HCI_IBS transmit side sleep protocol states */ enum tx_ibs_states { @@ -111,6 +114,7 @@ struct qca_memdump_data { char *memdump_buf_tail; u32 current_seq_no; u32 received_dump; + u32 ram_dump_size; }; struct qca_memdump_event_hdr { @@ -187,10 +191,11 @@ struct qca_vreg { unsigned int load_uA; }; -struct qca_vreg_data { +struct qca_device_data { enum qca_btsoc_type soc_type; struct qca_vreg *vregs; size_t num_vregs; + uint32_t capabilities; }; /* @@ -972,6 +977,8 @@ static void qca_controller_memdump(struct work_struct *work) char nullBuff[QCA_DUMP_PACKET_SIZE] = { 0 }; u16 seq_no; u32 dump_size; + u32 rx_size; + enum qca_btsoc_type soc_type = qca_soc_type(hu); while ((skb = skb_dequeue(&qca->rx_memdump_q))) { @@ -1021,10 +1028,12 @@ static void qca_controller_memdump(struct work_struct *work) dump_size); queue_delayed_work(qca->workqueue, &qca->ctrl_memdump_timeout, - msecs_to_jiffies(MEMDUMP_TIMEOUT_MS)); + msecs_to_jiffies(MEMDUMP_TIMEOUT_MS) + ); skb_pull(skb, sizeof(dump_size)); memdump_buf = vmalloc(dump_size); + qca_memdump->ram_dump_size = dump_size; qca_memdump->memdump_buf_head = memdump_buf; qca_memdump->memdump_buf_tail = memdump_buf; } @@ -1047,26 +1056,57 @@ static void qca_controller_memdump(struct work_struct *work) * the controller. In such cases let us store the dummy * packets in the buffer. 
*/ + /* For QCA6390, controller does not lost packets but + * sequence number field of packat sometimes has error + * bits, so skip this checking for missing packet. + */ while ((seq_no > qca_memdump->current_seq_no + 1) && - seq_no != QCA_LAST_SEQUENCE_NUM) { + (soc_type != QCA_QCA6390) && + seq_no != QCA_LAST_SEQUENCE_NUM) { bt_dev_err(hu->hdev, "QCA controller missed packet:%d", qca_memdump->current_seq_no); + rx_size = qca_memdump->received_dump; + rx_size += QCA_DUMP_PACKET_SIZE; + if (rx_size > qca_memdump->ram_dump_size) { + bt_dev_err(hu->hdev, + "QCA memdump received %d, no space for missed packet", + qca_memdump->received_dump); + break; + } memcpy(memdump_buf, nullBuff, QCA_DUMP_PACKET_SIZE); memdump_buf = memdump_buf + QCA_DUMP_PACKET_SIZE; qca_memdump->received_dump += QCA_DUMP_PACKET_SIZE; qca_memdump->current_seq_no++; } - memcpy(memdump_buf, (unsigned char *) skb->data, skb->len); - memdump_buf = memdump_buf + skb->len; - qca_memdump->memdump_buf_tail = memdump_buf; - qca_memdump->current_seq_no = seq_no + 1; - qca_memdump->received_dump += skb->len; + rx_size = qca_memdump->received_dump + skb->len; + if (rx_size <= qca_memdump->ram_dump_size) { + if ((seq_no != QCA_LAST_SEQUENCE_NUM) && + (seq_no != qca_memdump->current_seq_no)) + bt_dev_err(hu->hdev, + "QCA memdump unexpected packet %d", + seq_no); + bt_dev_dbg(hu->hdev, + "QCA memdump packet %d with length %d", + seq_no, skb->len); + memcpy(memdump_buf, (unsigned char *)skb->data, + skb->len); + memdump_buf = memdump_buf + skb->len; + qca_memdump->memdump_buf_tail = memdump_buf; + qca_memdump->current_seq_no = seq_no + 1; + qca_memdump->received_dump += skb->len; + } else { + bt_dev_err(hu->hdev, + "QCA memdump received %d, no space for packet %d", + qca_memdump->received_dump, seq_no); + } qca->qca_memdump = qca_memdump; kfree_skb(skb); if (seq_no == QCA_LAST_SEQUENCE_NUM) { - bt_dev_info(hu->hdev, "QCA writing crash dump of size %d bytes", - qca_memdump->received_dump); + bt_dev_info(hu->hdev, + "QCA memdump Done, received %d, total %d", + qca_memdump->received_dump, + qca_memdump->ram_dump_size); memdump_buf = qca_memdump->memdump_buf_head; dev_coredumpv(&hu->serdev->dev, memdump_buf, qca_memdump->received_dump, GFP_KERNEL); @@ -1691,7 +1731,7 @@ static const struct hci_uart_proto qca_proto = { .dequeue = qca_dequeue, }; -static const struct qca_vreg_data qca_soc_data_wcn3990 = { +static const struct qca_device_data qca_soc_data_wcn3990 = { .soc_type = QCA_WCN3990, .vregs = (struct qca_vreg []) { { "vddio", 15000 }, @@ -1702,7 +1742,7 @@ static const struct qca_vreg_data qca_soc_data_wcn3990 = { .num_vregs = 4, }; -static const struct qca_vreg_data qca_soc_data_wcn3991 = { +static const struct qca_device_data qca_soc_data_wcn3991 = { .soc_type = QCA_WCN3991, .vregs = (struct qca_vreg []) { { "vddio", 15000 }, @@ -1711,9 +1751,10 @@ static const struct qca_vreg_data qca_soc_data_wcn3991 = { { "vddch0", 450000 }, }, .num_vregs = 4, + .capabilities = QCA_CAP_WIDEBAND_SPEECH, }; -static const struct qca_vreg_data qca_soc_data_wcn3998 = { +static const struct qca_device_data qca_soc_data_wcn3998 = { .soc_type = QCA_WCN3998, .vregs = (struct qca_vreg []) { { "vddio", 10000 }, @@ -1724,7 +1765,7 @@ static const struct qca_vreg_data qca_soc_data_wcn3998 = { .num_vregs = 4, }; -static const struct qca_vreg_data qca_soc_data_qca6390 = { +static const struct qca_device_data qca_soc_data_qca6390 = { .soc_type = QCA_QCA6390, .num_vregs = 0, }; @@ -1860,7 +1901,7 @@ static int qca_serdev_probe(struct serdev_device *serdev) { 
struct qca_serdev *qcadev; struct hci_dev *hdev; - const struct qca_vreg_data *data; + const struct qca_device_data *data; int err; bool power_ctrl_enabled = true; @@ -1942,12 +1983,19 @@ static int qca_serdev_probe(struct serdev_device *serdev) } } + hdev = qcadev->serdev_hu.hdev; + if (power_ctrl_enabled) { - hdev = qcadev->serdev_hu.hdev; set_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks); hdev->shutdown = qca_power_off; } + /* Wideband speech support must be set per driver since it can't be + * queried via hci. + */ + if (data && (data->capabilities & QCA_CAP_WIDEBAND_SPEECH)) + set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks); + return 0; } @@ -1963,10 +2011,43 @@ static void qca_serdev_remove(struct serdev_device *serdev) hci_uart_unregister_device(&qcadev->serdev_hu); } +static void qca_serdev_shutdown(struct device *dev) +{ + int ret; + int timeout = msecs_to_jiffies(CMD_TRANS_TIMEOUT_MS); + struct serdev_device *serdev = to_serdev_device(dev); + struct qca_serdev *qcadev = serdev_device_get_drvdata(serdev); + const u8 ibs_wake_cmd[] = { 0xFD }; + const u8 edl_reset_soc_cmd[] = { 0x01, 0x00, 0xFC, 0x01, 0x05 }; + + if (qcadev->btsoc_type == QCA_QCA6390) { + serdev_device_write_flush(serdev); + ret = serdev_device_write_buf(serdev, ibs_wake_cmd, + sizeof(ibs_wake_cmd)); + if (ret < 0) { + BT_ERR("QCA send IBS_WAKE_IND error: %d", ret); + return; + } + serdev_device_wait_until_sent(serdev, timeout); + usleep_range(8000, 10000); + + serdev_device_write_flush(serdev); + ret = serdev_device_write_buf(serdev, edl_reset_soc_cmd, + sizeof(edl_reset_soc_cmd)); + if (ret < 0) { + BT_ERR("QCA send EDL_RESET_REQ error: %d", ret); + return; + } + serdev_device_wait_until_sent(serdev, timeout); + usleep_range(8000, 10000); + } +} + static int __maybe_unused qca_suspend(struct device *dev) { - struct hci_dev *hdev = container_of(dev, struct hci_dev, dev); - struct hci_uart *hu = hci_get_drvdata(hdev); + struct serdev_device *serdev = to_serdev_device(dev); + struct qca_serdev *qcadev = serdev_device_get_drvdata(serdev); + struct hci_uart *hu = &qcadev->serdev_hu; struct qca_data *qca = hu->priv; unsigned long flags; int ret = 0; @@ -2045,8 +2126,9 @@ error: static int __maybe_unused qca_resume(struct device *dev) { - struct hci_dev *hdev = container_of(dev, struct hci_dev, dev); - struct hci_uart *hu = hci_get_drvdata(hdev); + struct serdev_device *serdev = to_serdev_device(dev); + struct qca_serdev *qcadev = serdev_device_get_drvdata(serdev); + struct hci_uart *hu = &qcadev->serdev_hu; struct qca_data *qca = hu->priv; clear_bit(QCA_SUSPENDING, &qca->flags); @@ -2088,6 +2170,7 @@ static struct serdev_device_driver qca_serdev_driver = { .name = "hci_uart_qca", .of_match_table = of_match_ptr(qca_bluetooth_of_match), .acpi_match_table = ACPI_PTR(qca_bluetooth_acpi_match), + .shutdown = qca_serdev_shutdown, .pm = &qca_pm_ops, }, }; diff --git a/drivers/clk/qcom/Kconfig b/drivers/clk/qcom/Kconfig index 11ec6f466467..abb121f8de52 100644 --- a/drivers/clk/qcom/Kconfig +++ b/drivers/clk/qcom/Kconfig @@ -377,6 +377,7 @@ config SM_GCC_8150 config SM_GCC_8250 tristate "SM8250 Global Clock Controller" + select QCOM_GDSC help Support for the global clock controller on SM8250 devices. 
Say Y if you want to use peripheral devices such as UART, diff --git a/drivers/clk/qcom/gcc-sm8150.c b/drivers/clk/qcom/gcc-sm8150.c index ef98fdc51755..732bc7c937e6 100644 --- a/drivers/clk/qcom/gcc-sm8150.c +++ b/drivers/clk/qcom/gcc-sm8150.c @@ -76,8 +76,7 @@ static struct clk_alpha_pll_postdiv gpll0_out_even = { .clkr.hw.init = &(struct clk_init_data){ .name = "gpll0_out_even", .parent_data = &(const struct clk_parent_data){ - .fw_name = "bi_tcxo", - .name = "bi_tcxo", + .hw = &gpll0.clkr.hw, }, .num_parents = 1, .ops = &clk_trion_pll_postdiv_ops, diff --git a/drivers/crypto/chelsio/chtls/chtls_io.c b/drivers/crypto/chelsio/chtls/chtls_io.c index dccef3a2908b..e1401d9cc756 100644 --- a/drivers/crypto/chelsio/chtls/chtls_io.c +++ b/drivers/crypto/chelsio/chtls/chtls_io.c @@ -682,7 +682,7 @@ int chtls_push_frames(struct chtls_sock *csk, int comp) make_tx_data_wr(sk, skb, immdlen, len, credits_needed, completion); tp->snd_nxt += len; - tp->lsndtime = tcp_time_stamp(tp); + tp->lsndtime = tcp_jiffies32; if (completion) ULP_SKB_CB(skb)->flags &= ~ULPCB_FLAG_NEED_HDR; } else { diff --git a/drivers/gpio/gpio-bcm-kona.c b/drivers/gpio/gpio-bcm-kona.c index baee8c3f06ad..cf3687a7925f 100644 --- a/drivers/gpio/gpio-bcm-kona.c +++ b/drivers/gpio/gpio-bcm-kona.c @@ -625,7 +625,7 @@ static int bcm_kona_gpio_probe(struct platform_device *pdev) kona_gpio->reg_base = devm_platform_ioremap_resource(pdev, 0); if (IS_ERR(kona_gpio->reg_base)) { - ret = -ENXIO; + ret = PTR_ERR(kona_gpio->reg_base); goto err_irq_domain; } diff --git a/drivers/gpio/gpio-exar.c b/drivers/gpio/gpio-exar.c index da1ef0b1c291..b1accfba017d 100644 --- a/drivers/gpio/gpio-exar.c +++ b/drivers/gpio/gpio-exar.c @@ -148,8 +148,10 @@ static int gpio_exar_probe(struct platform_device *pdev) mutex_init(&exar_gpio->lock); index = ida_simple_get(&ida_index, 0, 0, GFP_KERNEL); - if (index < 0) - goto err_destroy; + if (index < 0) { + ret = index; + goto err_mutex_destroy; + } sprintf(exar_gpio->name, "exar_gpio%d", index); exar_gpio->gpio_chip.label = exar_gpio->name; @@ -176,6 +178,7 @@ static int gpio_exar_probe(struct platform_device *pdev) err_destroy: ida_simple_remove(&ida_index, index); +err_mutex_destroy: mutex_destroy(&exar_gpio->lock); return ret; } diff --git a/drivers/gpio/gpio-mlxbf2.c b/drivers/gpio/gpio-mlxbf2.c index 7b7085050219..da570e63589d 100644 --- a/drivers/gpio/gpio-mlxbf2.c +++ b/drivers/gpio/gpio-mlxbf2.c @@ -127,8 +127,8 @@ static int mlxbf2_gpio_lock_acquire(struct mlxbf2_gpio_context *gs) { u32 arm_gpio_lock_val; - spin_lock(&gs->gc.bgpio_lock); mutex_lock(yu_arm_gpio_lock_param.lock); + spin_lock(&gs->gc.bgpio_lock); arm_gpio_lock_val = readl(yu_arm_gpio_lock_param.io); @@ -136,8 +136,8 @@ static int mlxbf2_gpio_lock_acquire(struct mlxbf2_gpio_context *gs) * When lock active bit[31] is set, ModeX is write enabled */ if (YU_LOCK_ACTIVE_BIT(arm_gpio_lock_val)) { - mutex_unlock(yu_arm_gpio_lock_param.lock); spin_unlock(&gs->gc.bgpio_lock); + mutex_unlock(yu_arm_gpio_lock_param.lock); return -EINVAL; } @@ -152,8 +152,8 @@ static int mlxbf2_gpio_lock_acquire(struct mlxbf2_gpio_context *gs) static void mlxbf2_gpio_lock_release(struct mlxbf2_gpio_context *gs) { writel(YU_ARM_GPIO_LOCK_RELEASE, yu_arm_gpio_lock_param.io); - mutex_unlock(yu_arm_gpio_lock_param.lock); spin_unlock(&gs->gc.bgpio_lock); + mutex_unlock(yu_arm_gpio_lock_param.lock); } /* diff --git a/drivers/gpio/gpio-mvebu.c b/drivers/gpio/gpio-mvebu.c index 3c9f4fb3d5a2..bd65114eb170 100644 --- a/drivers/gpio/gpio-mvebu.c +++ b/drivers/gpio/gpio-mvebu.c @@ 
-782,6 +782,15 @@ static int mvebu_pwm_probe(struct platform_device *pdev, "marvell,armada-370-gpio")) return 0; + /* + * There are only two sets of PWM configuration registers for + * all the GPIO lines on those SoCs which this driver reserves + * for the first two GPIO chips. So if the resource is missing + * we can't treat it as an error. + */ + if (!platform_get_resource_byname(pdev, IORESOURCE_MEM, "pwm")) + return 0; + if (IS_ERR(mvchip->clk)) return PTR_ERR(mvchip->clk); @@ -804,12 +813,6 @@ static int mvebu_pwm_probe(struct platform_device *pdev, mvchip->mvpwm = mvpwm; mvpwm->mvchip = mvchip; - /* - * There are only two sets of PWM configuration registers for - * all the GPIO lines on those SoCs which this driver reserves - * for the first two GPIO chips. So if the resource is missing - * we can't treat it as an error. - */ mvpwm->membase = devm_platform_ioremap_resource_byname(pdev, "pwm"); if (IS_ERR(mvpwm->membase)) return PTR_ERR(mvpwm->membase); diff --git a/drivers/gpio/gpio-pxa.c b/drivers/gpio/gpio-pxa.c index 1361270ecf8c..0cb6600b8eee 100644 --- a/drivers/gpio/gpio-pxa.c +++ b/drivers/gpio/gpio-pxa.c @@ -660,8 +660,8 @@ static int pxa_gpio_probe(struct platform_device *pdev) pchip->irq1 = irq1; gpio_reg_base = devm_platform_ioremap_resource(pdev, 0); - if (!gpio_reg_base) - return -EINVAL; + if (IS_ERR(gpio_reg_base)) + return PTR_ERR(gpio_reg_base); clk = clk_get(&pdev->dev, NULL); if (IS_ERR(clk)) { diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c index 182136d98b97..c14f0784274a 100644 --- a/drivers/gpio/gpiolib.c +++ b/drivers/gpio/gpiolib.c @@ -729,6 +729,10 @@ static int linehandle_create(struct gpio_device *gdev, void __user *ip) if (ret) goto out_free_descs; } + + atomic_notifier_call_chain(&desc->gdev->notifier, + GPIOLINE_CHANGED_REQUESTED, desc); + dev_dbg(&gdev->dev, "registered chardev handle for line %d\n", offset); } @@ -1083,6 +1087,9 @@ static int lineevent_create(struct gpio_device *gdev, void __user *ip) if (ret) goto out_free_desc; + atomic_notifier_call_chain(&desc->gdev->notifier, + GPIOLINE_CHANGED_REQUESTED, desc); + le->irq = gpiod_to_irq(desc); if (le->irq <= 0) { ret = -ENODEV; @@ -2998,8 +3005,6 @@ static int gpiod_request_commit(struct gpio_desc *desc, const char *label) } done: spin_unlock_irqrestore(&gpio_lock, flags); - atomic_notifier_call_chain(&desc->gdev->notifier, - GPIOLINE_CHANGED_REQUESTED, desc); return ret; } @@ -4215,7 +4220,9 @@ int gpiochip_lock_as_irq(struct gpio_chip *gc, unsigned int offset) } } - if (test_bit(FLAG_IS_OUT, &desc->flags)) { + /* To be valid for IRQ the line needs to be input or open drain */ + if (test_bit(FLAG_IS_OUT, &desc->flags) && + !test_bit(FLAG_OPEN_DRAIN, &desc->flags)) { chip_err(gc, "%s: tried to flag a GPIO set as output for IRQ\n", __func__); @@ -4278,7 +4285,12 @@ void gpiochip_enable_irq(struct gpio_chip *gc, unsigned int offset) if (!IS_ERR(desc) && !WARN_ON(!test_bit(FLAG_USED_AS_IRQ, &desc->flags))) { - WARN_ON(test_bit(FLAG_IS_OUT, &desc->flags)); + /* + * We must not be output when using IRQ UNLESS we are + * open drain. 
+ */ + WARN_ON(test_bit(FLAG_IS_OUT, &desc->flags) && + !test_bit(FLAG_OPEN_DRAIN, &desc->flags)); set_bit(FLAG_IRQ_IS_ENABLED, &desc->flags); } } @@ -4961,6 +4973,9 @@ struct gpio_desc *__must_check gpiod_get_index(struct device *dev, return ERR_PTR(ret); } + atomic_notifier_call_chain(&desc->gdev->notifier, + GPIOLINE_CHANGED_REQUESTED, desc); + return desc; } EXPORT_SYMBOL_GPL(gpiod_get_index); @@ -5026,6 +5041,9 @@ struct gpio_desc *fwnode_get_named_gpiod(struct fwnode_handle *fwnode, return ERR_PTR(ret); } + atomic_notifier_call_chain(&desc->gdev->notifier, + GPIOLINE_CHANGED_REQUESTED, desc); + return desc; } EXPORT_SYMBOL_GPL(fwnode_get_named_gpiod); diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h index 4a3049841086..c24cad3c64ed 100644 --- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h +++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h @@ -1050,7 +1050,7 @@ void kfd_dec_compute_active(struct kfd_dev *dev); /* Check with device cgroup if @kfd device is accessible */ static inline int kfd_devcgroup_check_permission(struct kfd_dev *kfd) { -#if defined(CONFIG_CGROUP_DEVICE) +#if defined(CONFIG_CGROUP_DEVICE) || defined(CONFIG_CGROUP_BPF) struct drm_device *ddev = kfd->ddev; return devcgroup_check_permission(DEVCG_DEV_CHAR, ddev->driver->major, diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index 28e651b173ab..7fc15b82fe48 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -7880,13 +7880,6 @@ static int dm_update_plane_state(struct dc *dc, return -EINVAL; } - if (new_plane_state->crtc_x <= -new_acrtc->max_cursor_width || - new_plane_state->crtc_y <= -new_acrtc->max_cursor_height) { - DRM_DEBUG_ATOMIC("Bad cursor position %d, %d\n", - new_plane_state->crtc_x, new_plane_state->crtc_y); - return -EINVAL; - } - return 0; } diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c index 82fc3d5b3b2a..416afb99529d 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c @@ -1684,6 +1684,8 @@ static void delay_cursor_until_vupdate(struct dc *dc, struct pipe_ctx *pipe_ctx) return; /* Stall out until the cursor update completes. 
*/ + if (vupdate_end < vupdate_start) + vupdate_end += stream->timing.v_total; us_vupdate = (vupdate_end - vupdate_start + 1) * us_per_line; udelay(us_to_vupdate + us_vupdate); } diff --git a/drivers/gpu/drm/ingenic/ingenic-drm.c b/drivers/gpu/drm/ingenic/ingenic-drm.c index 1754c0547069..548cc25ea4ab 100644 --- a/drivers/gpu/drm/ingenic/ingenic-drm.c +++ b/drivers/gpu/drm/ingenic/ingenic-drm.c @@ -328,8 +328,8 @@ static int ingenic_drm_crtc_atomic_check(struct drm_crtc *crtc, if (!drm_atomic_crtc_needs_modeset(state)) return 0; - if (state->mode.hdisplay > priv->soc_info->max_height || - state->mode.vdisplay > priv->soc_info->max_width) + if (state->mode.hdisplay > priv->soc_info->max_width || + state->mode.vdisplay > priv->soc_info->max_height) return -EINVAL; rate = clk_round_rate(priv->pix_clk, @@ -474,7 +474,7 @@ static int ingenic_drm_encoder_atomic_check(struct drm_encoder *encoder, static irqreturn_t ingenic_drm_irq_handler(int irq, void *arg) { - struct ingenic_drm *priv = arg; + struct ingenic_drm *priv = drm_device_get_priv(arg); unsigned int state; regmap_read(priv->map, JZ_REG_LCD_STATE, &state); diff --git a/drivers/infiniband/core/rdma_core.c b/drivers/infiniband/core/rdma_core.c index bf8e149d3191..e0a5e897e4b1 100644 --- a/drivers/infiniband/core/rdma_core.c +++ b/drivers/infiniband/core/rdma_core.c @@ -153,9 +153,9 @@ static int uverbs_destroy_uobject(struct ib_uobject *uobj, uobj->context = NULL; /* - * For DESTROY the usecnt is held write locked, the caller is expected - * to put it unlock and put the object when done with it. Only DESTROY - * can remove the IDR handle. + * For DESTROY the usecnt is not changed, the caller is expected to + * manage it via uobj_put_destroy(). Only DESTROY can remove the IDR + * handle. */ if (reason != RDMA_REMOVE_DESTROY) atomic_set(&uobj->usecnt, 0); @@ -187,7 +187,7 @@ static int uverbs_destroy_uobject(struct ib_uobject *uobj, /* * This calls uverbs_destroy_uobject() using the RDMA_REMOVE_DESTROY * sequence. It should only be used from command callbacks. On success the - * caller must pair this with rdma_lookup_put_uobject(LOOKUP_WRITE). This + * caller must pair this with uobj_put_destroy(). This * version requires the caller to have already obtained an * LOOKUP_DESTROY uobject kref. */ @@ -198,6 +198,13 @@ int uobj_destroy(struct ib_uobject *uobj, struct uverbs_attr_bundle *attrs) down_read(&ufile->hw_destroy_rwsem); + /* + * Once the uobject is destroyed by RDMA_REMOVE_DESTROY then it is left + * write locked as the callers put it back with UVERBS_LOOKUP_DESTROY. + * This is because any other concurrent thread can still see the object + * in the xarray due to RCU. Leaving it locked ensures nothing else will + * touch it. + */ ret = uverbs_try_lock_object(uobj, UVERBS_LOOKUP_WRITE); if (ret) goto out_unlock; @@ -216,7 +223,7 @@ out_unlock: /* * uobj_get_destroy destroys the HW object and returns a handle to the uobj * with a NULL object pointer. The caller must pair this with - * uverbs_put_destroy. + * uobj_put_destroy(). 
*/ struct ib_uobject *__uobj_get_destroy(const struct uverbs_api_object *obj, u32 id, struct uverbs_attr_bundle *attrs) @@ -250,8 +257,7 @@ int __uobj_perform_destroy(const struct uverbs_api_object *obj, u32 id, uobj = __uobj_get_destroy(obj, id, attrs); if (IS_ERR(uobj)) return PTR_ERR(uobj); - - rdma_lookup_put_uobject(uobj, UVERBS_LOOKUP_WRITE); + uobj_put_destroy(uobj); return 0; } diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c index a401931189b7..44683073be0c 100644 --- a/drivers/infiniband/hw/mlx5/mr.c +++ b/drivers/infiniband/hw/mlx5/mr.c @@ -1439,6 +1439,7 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, if (is_odp_mr(mr)) { to_ib_umem_odp(mr->umem)->private = mr; + init_waitqueue_head(&mr->q_deferred_work); atomic_set(&mr->num_deferred_work, 0); err = xa_err(xa_store(&dev->odp_mkeys, mlx5_base_mkey(mr->mmkey.key), &mr->mmkey, diff --git a/drivers/infiniband/hw/qib/qib_sysfs.c b/drivers/infiniband/hw/qib/qib_sysfs.c index 568b21eb6ea1..021df0654ba7 100644 --- a/drivers/infiniband/hw/qib/qib_sysfs.c +++ b/drivers/infiniband/hw/qib/qib_sysfs.c @@ -760,7 +760,7 @@ int qib_create_port_files(struct ib_device *ibdev, u8 port_num, qib_dev_err(dd, "Skipping linkcontrol sysfs info, (err %d) port %u\n", ret, port_num); - goto bail; + goto bail_link; } kobject_uevent(&ppd->pport_kobj, KOBJ_ADD); @@ -770,7 +770,7 @@ int qib_create_port_files(struct ib_device *ibdev, u8 port_num, qib_dev_err(dd, "Skipping sl2vl sysfs info, (err %d) port %u\n", ret, port_num); - goto bail_link; + goto bail_sl; } kobject_uevent(&ppd->sl2vl_kobj, KOBJ_ADD); @@ -780,7 +780,7 @@ int qib_create_port_files(struct ib_device *ibdev, u8 port_num, qib_dev_err(dd, "Skipping diag_counters sysfs info, (err %d) port %u\n", ret, port_num); - goto bail_sl; + goto bail_diagc; } kobject_uevent(&ppd->diagc_kobj, KOBJ_ADD); @@ -793,7 +793,7 @@ int qib_create_port_files(struct ib_device *ibdev, u8 port_num, qib_dev_err(dd, "Skipping Congestion Control sysfs info, (err %d) port %u\n", ret, port_num); - goto bail_diagc; + goto bail_cc; } kobject_uevent(&ppd->pport_cc_kobj, KOBJ_ADD); @@ -854,6 +854,7 @@ void qib_verbs_unregister_sysfs(struct qib_devdata *dd) &cc_table_bin_attr); kobject_put(&ppd->pport_cc_kobj); } + kobject_put(&ppd->diagc_kobj); kobject_put(&ppd->sl2vl_kobj); kobject_put(&ppd->pport_kobj); } diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c index e580ae9cc55a..780fd2dfc07e 100644 --- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c +++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c @@ -829,7 +829,7 @@ static int pvrdma_pci_probe(struct pci_dev *pdev, !(pci_resource_flags(pdev, 1) & IORESOURCE_MEM)) { dev_err(&pdev->dev, "PCI BAR region not MMIO\n"); ret = -ENOMEM; - goto err_free_device; + goto err_disable_pdev; } ret = pci_request_regions(pdev, DRV_NAME); diff --git a/drivers/infiniband/ulp/ipoib/ipoib.h b/drivers/infiniband/ulp/ipoib/ipoib.h index e188a95984b5..9a3379c49541 100644 --- a/drivers/infiniband/ulp/ipoib/ipoib.h +++ b/drivers/infiniband/ulp/ipoib/ipoib.h @@ -377,8 +377,12 @@ struct ipoib_dev_priv { struct ipoib_rx_buf *rx_ring; struct ipoib_tx_buf *tx_ring; + /* cyclic ring variables for managing tx_ring, for UD only */ unsigned int tx_head; unsigned int tx_tail; + /* cyclic ring variables for counting overall outstanding send WRs */ + unsigned int global_tx_head; + unsigned int global_tx_tail; struct ib_sge tx_sge[MAX_SKB_FRAGS + 1]; struct ib_ud_wr tx_wr; struct ib_wc 
send_wc[MAX_SEND_CQE]; diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c index c59e00a0881f..9bf0fa30df28 100644 --- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c +++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c @@ -756,7 +756,8 @@ void ipoib_cm_send(struct net_device *dev, struct sk_buff *skb, struct ipoib_cm_ return; } - if ((priv->tx_head - priv->tx_tail) == ipoib_sendq_size - 1) { + if ((priv->global_tx_head - priv->global_tx_tail) == + ipoib_sendq_size - 1) { ipoib_dbg(priv, "TX ring 0x%x full, stopping kernel net queue\n", tx->qp->qp_num); netif_stop_queue(dev); @@ -786,7 +787,7 @@ void ipoib_cm_send(struct net_device *dev, struct sk_buff *skb, struct ipoib_cm_ } else { netif_trans_update(dev); ++tx->tx_head; - ++priv->tx_head; + ++priv->global_tx_head; } } @@ -820,10 +821,11 @@ void ipoib_cm_handle_tx_wc(struct net_device *dev, struct ib_wc *wc) netif_tx_lock(dev); ++tx->tx_tail; - ++priv->tx_tail; + ++priv->global_tx_tail; if (unlikely(netif_queue_stopped(dev) && - (priv->tx_head - priv->tx_tail) <= ipoib_sendq_size >> 1 && + ((priv->global_tx_head - priv->global_tx_tail) <= + ipoib_sendq_size >> 1) && test_bit(IPOIB_FLAG_ADMIN_UP, &priv->flags))) netif_wake_queue(dev); @@ -1232,8 +1234,9 @@ timeout: dev_kfree_skb_any(tx_req->skb); netif_tx_lock_bh(p->dev); ++p->tx_tail; - ++priv->tx_tail; - if (unlikely(priv->tx_head - priv->tx_tail == ipoib_sendq_size >> 1) && + ++priv->global_tx_tail; + if (unlikely((priv->global_tx_head - priv->global_tx_tail) <= + ipoib_sendq_size >> 1) && netif_queue_stopped(p->dev) && test_bit(IPOIB_FLAG_ADMIN_UP, &priv->flags)) netif_wake_queue(p->dev); diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ib.c b/drivers/infiniband/ulp/ipoib/ipoib_ib.c index c332b4761816..da3c5315bbb5 100644 --- a/drivers/infiniband/ulp/ipoib/ipoib_ib.c +++ b/drivers/infiniband/ulp/ipoib/ipoib_ib.c @@ -407,9 +407,11 @@ static void ipoib_ib_handle_tx_wc(struct net_device *dev, struct ib_wc *wc) dev_kfree_skb_any(tx_req->skb); ++priv->tx_tail; + ++priv->global_tx_tail; if (unlikely(netif_queue_stopped(dev) && - ((priv->tx_head - priv->tx_tail) <= ipoib_sendq_size >> 1) && + ((priv->global_tx_head - priv->global_tx_tail) <= + ipoib_sendq_size >> 1) && test_bit(IPOIB_FLAG_ADMIN_UP, &priv->flags))) netif_wake_queue(dev); @@ -634,7 +636,8 @@ int ipoib_send(struct net_device *dev, struct sk_buff *skb, else priv->tx_wr.wr.send_flags &= ~IB_SEND_IP_CSUM; /* increase the tx_head after send success, but use it for queue state */ - if (priv->tx_head - priv->tx_tail == ipoib_sendq_size - 1) { + if ((priv->global_tx_head - priv->global_tx_tail) == + ipoib_sendq_size - 1) { ipoib_dbg(priv, "TX ring full, stopping kernel net queue\n"); netif_stop_queue(dev); } @@ -662,6 +665,7 @@ int ipoib_send(struct net_device *dev, struct sk_buff *skb, rc = priv->tx_head; ++priv->tx_head; + ++priv->global_tx_head; } return rc; } @@ -807,6 +811,7 @@ int ipoib_ib_dev_stop_default(struct net_device *dev) ipoib_dma_unmap_tx(priv, tx_req); dev_kfree_skb_any(tx_req->skb); ++priv->tx_tail; + ++priv->global_tx_tail; } for (i = 0; i < ipoib_recvq_size; ++i) { diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c index 81b8227214f1..ceec24d45185 100644 --- a/drivers/infiniband/ulp/ipoib/ipoib_main.c +++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c @@ -1184,9 +1184,11 @@ static void ipoib_timeout(struct net_device *dev, unsigned int txqueue) ipoib_warn(priv, "transmit timeout: latency %d msecs\n", jiffies_to_msecs(jiffies - 
dev_trans_start(dev))); - ipoib_warn(priv, "queue stopped %d, tx_head %u, tx_tail %u\n", - netif_queue_stopped(dev), - priv->tx_head, priv->tx_tail); + ipoib_warn(priv, + "queue stopped %d, tx_head %u, tx_tail %u, global_tx_head %u, global_tx_tail %u\n", + netif_queue_stopped(dev), priv->tx_head, priv->tx_tail, + priv->global_tx_head, priv->global_tx_tail); + /* XXX reset QP, etc. */ } @@ -1701,7 +1703,7 @@ static int ipoib_dev_init_default(struct net_device *dev) goto out_rx_ring_cleanup; } - /* priv->tx_head, tx_tail & tx_outstanding are already 0 */ + /* priv->tx_head, tx_tail and global_tx_tail/head are already 0 */ if (ipoib_transport_dev_init(dev, priv->ca)) { pr_warn("%s: ipoib_transport_dev_init failed\n", diff --git a/drivers/input/evdev.c b/drivers/input/evdev.c index cb6e3a5f509c..0d57e51b8ba1 100644 --- a/drivers/input/evdev.c +++ b/drivers/input/evdev.c @@ -326,20 +326,6 @@ static int evdev_fasync(int fd, struct file *file, int on) return fasync_helper(fd, file, on, &client->fasync); } -static int evdev_flush(struct file *file, fl_owner_t id) -{ - struct evdev_client *client = file->private_data; - struct evdev *evdev = client->evdev; - - mutex_lock(&evdev->mutex); - - if (evdev->exist && !client->revoked) - input_flush_device(&evdev->handle, file); - - mutex_unlock(&evdev->mutex); - return 0; -} - static void evdev_free(struct device *dev) { struct evdev *evdev = container_of(dev, struct evdev, dev); @@ -453,6 +439,10 @@ static int evdev_release(struct inode *inode, struct file *file) unsigned int i; mutex_lock(&evdev->mutex); + + if (evdev->exist && !client->revoked) + input_flush_device(&evdev->handle, file); + evdev_ungrab(evdev, client); mutex_unlock(&evdev->mutex); @@ -1310,7 +1300,6 @@ static const struct file_operations evdev_fops = { .compat_ioctl = evdev_ioctl_compat, #endif .fasync = evdev_fasync, - .flush = evdev_flush, .llseek = no_llseek, }; diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c index 6b40a1c68f9f..c77cdb3b62b5 100644 --- a/drivers/input/joystick/xpad.c +++ b/drivers/input/joystick/xpad.c @@ -459,6 +459,16 @@ static const u8 xboxone_fw2015_init[] = { }; /* + * This packet is required for Xbox One S (0x045e:0x02ea) + * and Xbox One Elite Series 2 (0x045e:0x0b00) pads to + * initialize the controller that was previously used in + * Bluetooth mode. + */ +static const u8 xboxone_s_init[] = { + 0x05, 0x20, 0x00, 0x0f, 0x06 +}; + +/* * This packet is required for the Titanfall 2 Xbox One pads * (0x0e6f:0x0165) to finish initialization and for Hori pads * (0x0f0d:0x0067) to make the analog sticks work. 
@@ -516,6 +526,8 @@ static const struct xboxone_init_packet xboxone_init_packets[] = { XBOXONE_INIT_PKT(0x0e6f, 0x0165, xboxone_hori_init), XBOXONE_INIT_PKT(0x0f0d, 0x0067, xboxone_hori_init), XBOXONE_INIT_PKT(0x0000, 0x0000, xboxone_fw2015_init), + XBOXONE_INIT_PKT(0x045e, 0x02ea, xboxone_s_init), + XBOXONE_INIT_PKT(0x045e, 0x0b00, xboxone_s_init), XBOXONE_INIT_PKT(0x0e6f, 0x0000, xboxone_pdp_init1), XBOXONE_INIT_PKT(0x0e6f, 0x0000, xboxone_pdp_init2), XBOXONE_INIT_PKT(0x24c6, 0x541a, xboxone_rumblebegin_init), diff --git a/drivers/input/keyboard/applespi.c b/drivers/input/keyboard/applespi.c index d38398526965..14362ebab9a9 100644 --- a/drivers/input/keyboard/applespi.c +++ b/drivers/input/keyboard/applespi.c @@ -186,7 +186,7 @@ struct touchpad_protocol { u8 number_of_fingers; u8 clicked2; u8 unknown3[16]; - struct tp_finger fingers[0]; + struct tp_finger fingers[]; }; /** diff --git a/drivers/input/keyboard/cros_ec_keyb.c b/drivers/input/keyboard/cros_ec_keyb.c index 2b71c5a51f90..fc1793ca2f17 100644 --- a/drivers/input/keyboard/cros_ec_keyb.c +++ b/drivers/input/keyboard/cros_ec_keyb.c @@ -347,18 +347,14 @@ static int cros_ec_keyb_info(struct cros_ec_device *ec_dev, params->info_type = info_type; params->event_type = event_type; - ret = cros_ec_cmd_xfer(ec_dev, msg); - if (ret < 0) { - dev_warn(ec_dev->dev, "Transfer error %d/%d: %d\n", - (int)info_type, (int)event_type, ret); - } else if (msg->result == EC_RES_INVALID_VERSION) { + ret = cros_ec_cmd_xfer_status(ec_dev, msg); + if (ret == -ENOTSUPP) { /* With older ECs we just return 0 for everything */ memset(result, 0, result_size); ret = 0; - } else if (msg->result != EC_RES_SUCCESS) { - dev_warn(ec_dev->dev, "Error getting info %d/%d: %d\n", - (int)info_type, (int)event_type, msg->result); - ret = -EPROTO; + } else if (ret < 0) { + dev_warn(ec_dev->dev, "Transfer error %d/%d: %d\n", + (int)info_type, (int)event_type, ret); } else if (ret != result_size) { dev_warn(ec_dev->dev, "Wrong size %d/%d: %d != %zu\n", (int)info_type, (int)event_type, diff --git a/drivers/input/keyboard/dlink-dir685-touchkeys.c b/drivers/input/keyboard/dlink-dir685-touchkeys.c index b0ead7199c40..a69dcc3bd30c 100644 --- a/drivers/input/keyboard/dlink-dir685-touchkeys.c +++ b/drivers/input/keyboard/dlink-dir685-touchkeys.c @@ -143,7 +143,7 @@ MODULE_DEVICE_TABLE(of, dir685_tk_of_match); static struct i2c_driver dir685_tk_i2c_driver = { .driver = { - .name = "dlin-dir685-touchkeys", + .name = "dlink-dir685-touchkeys", .of_match_table = of_match_ptr(dir685_tk_of_match), }, .probe = dir685_tk_probe, diff --git a/drivers/input/misc/axp20x-pek.c b/drivers/input/misc/axp20x-pek.c index c8f87df93a50..9c6386b2af33 100644 --- a/drivers/input/misc/axp20x-pek.c +++ b/drivers/input/misc/axp20x-pek.c @@ -205,8 +205,11 @@ ATTRIBUTE_GROUPS(axp20x); static irqreturn_t axp20x_pek_irq(int irq, void *pwr) { - struct input_dev *idev = pwr; - struct axp20x_pek *axp20x_pek = input_get_drvdata(idev); + struct axp20x_pek *axp20x_pek = pwr; + struct input_dev *idev = axp20x_pek->input; + + if (!idev) + return IRQ_HANDLED; /* * The power-button is connected to ground so a falling edge (dbf) @@ -225,22 +228,9 @@ static irqreturn_t axp20x_pek_irq(int irq, void *pwr) static int axp20x_pek_probe_input_device(struct axp20x_pek *axp20x_pek, struct platform_device *pdev) { - struct axp20x_dev *axp20x = axp20x_pek->axp20x; struct input_dev *idev; int error; - axp20x_pek->irq_dbr = platform_get_irq_byname(pdev, "PEK_DBR"); - if (axp20x_pek->irq_dbr < 0) - return axp20x_pek->irq_dbr; - 
axp20x_pek->irq_dbr = regmap_irq_get_virq(axp20x->regmap_irqc, - axp20x_pek->irq_dbr); - - axp20x_pek->irq_dbf = platform_get_irq_byname(pdev, "PEK_DBF"); - if (axp20x_pek->irq_dbf < 0) - return axp20x_pek->irq_dbf; - axp20x_pek->irq_dbf = regmap_irq_get_virq(axp20x->regmap_irqc, - axp20x_pek->irq_dbf); - axp20x_pek->input = devm_input_allocate_device(&pdev->dev); if (!axp20x_pek->input) return -ENOMEM; @@ -255,24 +245,6 @@ static int axp20x_pek_probe_input_device(struct axp20x_pek *axp20x_pek, input_set_drvdata(idev, axp20x_pek); - error = devm_request_any_context_irq(&pdev->dev, axp20x_pek->irq_dbr, - axp20x_pek_irq, 0, - "axp20x-pek-dbr", idev); - if (error < 0) { - dev_err(&pdev->dev, "Failed to request dbr IRQ#%d: %d\n", - axp20x_pek->irq_dbr, error); - return error; - } - - error = devm_request_any_context_irq(&pdev->dev, axp20x_pek->irq_dbf, - axp20x_pek_irq, 0, - "axp20x-pek-dbf", idev); - if (error < 0) { - dev_err(&pdev->dev, "Failed to request dbf IRQ#%d: %d\n", - axp20x_pek->irq_dbf, error); - return error; - } - error = input_register_device(idev); if (error) { dev_err(&pdev->dev, "Can't register input device: %d\n", @@ -280,8 +252,6 @@ static int axp20x_pek_probe_input_device(struct axp20x_pek *axp20x_pek, return error; } - device_init_wakeup(&pdev->dev, true); - return 0; } @@ -339,6 +309,18 @@ static int axp20x_pek_probe(struct platform_device *pdev) axp20x_pek->axp20x = dev_get_drvdata(pdev->dev.parent); + axp20x_pek->irq_dbr = platform_get_irq_byname(pdev, "PEK_DBR"); + if (axp20x_pek->irq_dbr < 0) + return axp20x_pek->irq_dbr; + axp20x_pek->irq_dbr = regmap_irq_get_virq( + axp20x_pek->axp20x->regmap_irqc, axp20x_pek->irq_dbr); + + axp20x_pek->irq_dbf = platform_get_irq_byname(pdev, "PEK_DBF"); + if (axp20x_pek->irq_dbf < 0) + return axp20x_pek->irq_dbf; + axp20x_pek->irq_dbf = regmap_irq_get_virq( + axp20x_pek->axp20x->regmap_irqc, axp20x_pek->irq_dbf); + if (axp20x_pek_should_register_input(axp20x_pek, pdev)) { error = axp20x_pek_probe_input_device(axp20x_pek, pdev); if (error) @@ -347,6 +329,26 @@ static int axp20x_pek_probe(struct platform_device *pdev) axp20x_pek->info = (struct axp20x_info *)match->driver_data; + error = devm_request_any_context_irq(&pdev->dev, axp20x_pek->irq_dbr, + axp20x_pek_irq, 0, + "axp20x-pek-dbr", axp20x_pek); + if (error < 0) { + dev_err(&pdev->dev, "Failed to request dbr IRQ#%d: %d\n", + axp20x_pek->irq_dbr, error); + return error; + } + + error = devm_request_any_context_irq(&pdev->dev, axp20x_pek->irq_dbf, + axp20x_pek_irq, 0, + "axp20x-pek-dbf", axp20x_pek); + if (error < 0) { + dev_err(&pdev->dev, "Failed to request dbf IRQ#%d: %d\n", + axp20x_pek->irq_dbf, error); + return error; + } + + device_init_wakeup(&pdev->dev, true); + platform_set_drvdata(pdev, axp20x_pek); return 0; diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c index 4d2036209b45..758dae8d6500 100644 --- a/drivers/input/mouse/synaptics.c +++ b/drivers/input/mouse/synaptics.c @@ -170,6 +170,7 @@ static const char * const smbus_pnp_ids[] = { "LEN005b", /* P50 */ "LEN005e", /* T560 */ "LEN006c", /* T470s */ + "LEN007a", /* T470s */ "LEN0071", /* T480 */ "LEN0072", /* X1 Carbon Gen 5 (2017) - Elan/ALPS trackpoint */ "LEN0073", /* X1 Carbon G5 (Elantech) */ diff --git a/drivers/input/rmi4/rmi_driver.c b/drivers/input/rmi4/rmi_driver.c index 190b9974526b..258d5fe3d395 100644 --- a/drivers/input/rmi4/rmi_driver.c +++ b/drivers/input/rmi4/rmi_driver.c @@ -205,7 +205,7 @@ static irqreturn_t rmi_irq_fn(int irq, void *dev_id) if (count) { 
kfree(attn_data.data); - attn_data.data = NULL; + drvdata->attn_data.data = NULL; } if (!kfifo_is_empty(&drvdata->attn_fifo)) @@ -1210,7 +1210,8 @@ static int rmi_driver_probe(struct device *dev) if (data->input) { rmi_driver_set_input_name(rmi_dev, data->input); if (!rmi_dev->xport->input) { - if (input_register_device(data->input)) { + retval = input_register_device(data->input); + if (retval) { dev_err(dev, "%s: Failed to register input device.\n", __func__); goto err_destroy_functions; diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h index 08e919dbeb5d..7e048b557462 100644 --- a/drivers/input/serio/i8042-x86ia64io.h +++ b/drivers/input/serio/i8042-x86ia64io.h @@ -662,6 +662,13 @@ static const struct dmi_system_id __initconst i8042_dmi_reset_table[] = { DMI_MATCH(DMI_PRODUCT_NAME, "P65xRP"), }, }, + { + /* Lenovo ThinkPad Twist S230u */ + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), + DMI_MATCH(DMI_PRODUCT_NAME, "33474HU"), + }, + }, { } }; diff --git a/drivers/input/touchscreen/elants_i2c.c b/drivers/input/touchscreen/elants_i2c.c index 14c577c16b16..2289f9638116 100644 --- a/drivers/input/touchscreen/elants_i2c.c +++ b/drivers/input/touchscreen/elants_i2c.c @@ -19,6 +19,7 @@ */ +#include <linux/bits.h> #include <linux/module.h> #include <linux/input.h> #include <linux/interrupt.h> @@ -73,6 +74,7 @@ #define FW_POS_STATE 1 #define FW_POS_TOTAL 2 #define FW_POS_XY 3 +#define FW_POS_TOOL_TYPE 33 #define FW_POS_CHECKSUM 34 #define FW_POS_WIDTH 35 #define FW_POS_PRESSURE 45 @@ -842,6 +844,7 @@ static void elants_i2c_mt_event(struct elants_data *ts, u8 *buf) { struct input_dev *input = ts->input; unsigned int n_fingers; + unsigned int tool_type; u16 finger_state; int i; @@ -852,6 +855,10 @@ static void elants_i2c_mt_event(struct elants_data *ts, u8 *buf) dev_dbg(&ts->client->dev, "n_fingers: %u, state: %04x\n", n_fingers, finger_state); + /* Note: all fingers have the same tool type */ + tool_type = buf[FW_POS_TOOL_TYPE] & BIT(0) ? 
+ MT_TOOL_FINGER : MT_TOOL_PALM; + for (i = 0; i < MAX_CONTACT_NUM && n_fingers; i++) { if (finger_state & 1) { unsigned int x, y, p, w; @@ -867,7 +874,7 @@ static void elants_i2c_mt_event(struct elants_data *ts, u8 *buf) i, x, y, p, w); input_mt_slot(input, i); - input_mt_report_slot_state(input, MT_TOOL_FINGER, true); + input_mt_report_slot_state(input, tool_type, true); input_event(input, EV_ABS, ABS_MT_POSITION_X, x); input_event(input, EV_ABS, ABS_MT_POSITION_Y, y); input_event(input, EV_ABS, ABS_MT_PRESSURE, p); @@ -1307,6 +1314,8 @@ static int elants_i2c_probe(struct i2c_client *client, input_set_abs_params(ts->input, ABS_MT_POSITION_Y, 0, ts->y_max, 0, 0); input_set_abs_params(ts->input, ABS_MT_TOUCH_MAJOR, 0, 255, 0, 0); input_set_abs_params(ts->input, ABS_MT_PRESSURE, 0, 255, 0, 0); + input_set_abs_params(ts->input, ABS_MT_TOOL_TYPE, + 0, MT_TOOL_PALM, 0, 0); input_abs_set_res(ts->input, ABS_MT_POSITION_X, ts->x_res); input_abs_set_res(ts->input, ABS_MT_POSITION_Y, ts->y_res); input_abs_set_res(ts->input, ABS_MT_TOUCH_MAJOR, 1); diff --git a/drivers/input/touchscreen/mms114.c b/drivers/input/touchscreen/mms114.c index 69c6d559eeb0..2ef1adaed9af 100644 --- a/drivers/input/touchscreen/mms114.c +++ b/drivers/input/touchscreen/mms114.c @@ -91,15 +91,15 @@ static int __mms114_read_reg(struct mms114_data *data, unsigned int reg, if (reg <= MMS114_MODE_CONTROL && reg + len > MMS114_MODE_CONTROL) BUG(); - /* Write register: use repeated start */ + /* Write register */ xfer[0].addr = client->addr; - xfer[0].flags = I2C_M_TEN | I2C_M_NOSTART; + xfer[0].flags = client->flags & I2C_M_TEN; xfer[0].len = 1; xfer[0].buf = &buf; /* Read data */ xfer[1].addr = client->addr; - xfer[1].flags = I2C_M_RD; + xfer[1].flags = (client->flags & I2C_M_TEN) | I2C_M_RD; xfer[1].len = len; xfer[1].buf = val; @@ -428,10 +428,8 @@ static int mms114_probe(struct i2c_client *client, const void *match_data; int error; - if (!i2c_check_functionality(client->adapter, - I2C_FUNC_PROTOCOL_MANGLING)) { - dev_err(&client->dev, - "Need i2c bus that supports protocol mangling\n"); + if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C)) { + dev_err(&client->dev, "Not supported I2C adapter\n"); return -ENODEV; } diff --git a/drivers/input/touchscreen/usbtouchscreen.c b/drivers/input/touchscreen/usbtouchscreen.c index 16d70201de4a..397cb1d3f481 100644 --- a/drivers/input/touchscreen/usbtouchscreen.c +++ b/drivers/input/touchscreen/usbtouchscreen.c @@ -182,6 +182,7 @@ static const struct usb_device_id usbtouch_devices[] = { #endif #ifdef CONFIG_TOUCHSCREEN_USB_IRTOUCH + {USB_DEVICE(0x255e, 0x0001), .driver_info = DEVTYPE_IRTOUCH}, {USB_DEVICE(0x595a, 0x0001), .driver_info = DEVTYPE_IRTOUCH}, {USB_DEVICE(0x6615, 0x0001), .driver_info = DEVTYPE_IRTOUCH}, {USB_DEVICE(0x6615, 0x0012), .driver_info = DEVTYPE_IRTOUCH_HIRES}, diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c index 1faa08c8bbb4..03d6a26687bc 100644 --- a/drivers/iommu/iommu.c +++ b/drivers/iommu/iommu.c @@ -510,7 +510,7 @@ struct iommu_group *iommu_group_alloc(void) NULL, "%d", group->id); if (ret) { ida_simple_remove(&iommu_group_ida, group->id); - kfree(group); + kobject_put(&group->kobj); return ERR_PTR(ret); } diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c index c5367e2c8487..7896952de1ac 100644 --- a/drivers/mmc/core/block.c +++ b/drivers/mmc/core/block.c @@ -2484,8 +2484,8 @@ static int mmc_rpmb_chrdev_release(struct inode *inode, struct file *filp) struct mmc_rpmb_data *rpmb = container_of(inode->i_cdev, struct mmc_rpmb_data, 
chrdev); - put_device(&rpmb->dev); mmc_blk_put(rpmb->md); + put_device(&rpmb->dev); return 0; } diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c index 3f716466fcfd..e368f2dabf20 100644 --- a/drivers/mmc/host/sdhci.c +++ b/drivers/mmc/host/sdhci.c @@ -4000,9 +4000,6 @@ int sdhci_setup_host(struct sdhci_host *host) mmc_hostname(mmc), host->version); } - if (host->quirks & SDHCI_QUIRK_BROKEN_CQE) - mmc->caps2 &= ~MMC_CAP2_CQE; - if (host->quirks & SDHCI_QUIRK_FORCE_DMA) host->flags |= SDHCI_USE_SDMA; else if (!(host->caps & SDHCI_CAN_DO_SDMA)) @@ -4539,6 +4536,12 @@ int __sdhci_add_host(struct sdhci_host *host) struct mmc_host *mmc = host->mmc; int ret; + if ((mmc->caps2 & MMC_CAP2_CQE) && + (host->quirks & SDHCI_QUIRK_BROKEN_CQE)) { + mmc->caps2 &= ~MMC_CAP2_CQE; + mmc->cqe_ops = NULL; + } + host->complete_wq = alloc_workqueue("sdhci", flags, 0); if (!host->complete_wq) return -ENOMEM; diff --git a/drivers/net/bonding/bond_sysfs_slave.c b/drivers/net/bonding/bond_sysfs_slave.c index 007481557191..9b8346638f69 100644 --- a/drivers/net/bonding/bond_sysfs_slave.c +++ b/drivers/net/bonding/bond_sysfs_slave.c @@ -149,8 +149,10 @@ int bond_sysfs_slave_add(struct slave *slave) err = kobject_init_and_add(&slave->kobj, &slave_ktype, &(slave->dev->dev.kobj), "bonding_slave"); - if (err) + if (err) { + kobject_put(&slave->kobj); return err; + } for (a = slave_attrs; *a; ++a) { err = sysfs_create_file(&slave->kobj, &((*a)->attr)); diff --git a/drivers/net/dsa/ocelot/felix.c b/drivers/net/dsa/ocelot/felix.c index a6e272d2110d..66648986e6e3 100644 --- a/drivers/net/dsa/ocelot/felix.c +++ b/drivers/net/dsa/ocelot/felix.c @@ -103,13 +103,17 @@ static void felix_vlan_add(struct dsa_switch *ds, int port, const struct switchdev_obj_port_vlan *vlan) { struct ocelot *ocelot = ds->priv; + u16 flags = vlan->flags; u16 vid; int err; + if (dsa_is_cpu_port(ds, port)) + flags &= ~BRIDGE_VLAN_INFO_UNTAGGED; + for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++) { err = ocelot_vlan_add(ocelot, port, vid, - vlan->flags & BRIDGE_VLAN_INFO_PVID, - vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED); + flags & BRIDGE_VLAN_INFO_PVID, + flags & BRIDGE_VLAN_INFO_UNTAGGED); if (err) { dev_err(ds->dev, "Failed to add VLAN %d to port %d: %d\n", vid, port, err); diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index f86b6217f829..c62589c266b2 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -4176,14 +4176,12 @@ static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len, int i, intr_process, rc, tmo_count; struct input *req = msg; u32 *data = msg; - __le32 *resp_len; u8 *valid; u16 cp_ring_id, len = 0; struct hwrm_err_output *resp = bp->hwrm_cmd_resp_addr; u16 max_req_len = BNXT_HWRM_MAX_REQ_LEN; struct hwrm_short_input short_input = {0}; u32 doorbell_offset = BNXT_GRCPF_REG_CHIMP_COMM_TRIGGER; - u8 *resp_addr = (u8 *)bp->hwrm_cmd_resp_addr; u32 bar_offset = BNXT_GRCPF_REG_CHIMP_COMM; u16 dst = BNXT_HWRM_CHNL_CHIMP; @@ -4201,7 +4199,6 @@ static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len, bar_offset = BNXT_GRCPF_REG_KONG_COMM; doorbell_offset = BNXT_GRCPF_REG_KONG_COMM_TRIGGER; resp = bp->hwrm_cmd_kong_resp_addr; - resp_addr = (u8 *)bp->hwrm_cmd_kong_resp_addr; } memset(resp, 0, PAGE_SIZE); @@ -4270,7 +4267,6 @@ static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len, tmo_count = HWRM_SHORT_TIMEOUT_COUNTER; timeout = timeout - HWRM_SHORT_MIN_TIMEOUT * 
HWRM_SHORT_TIMEOUT_COUNTER; tmo_count += DIV_ROUND_UP(timeout, HWRM_MIN_TIMEOUT); - resp_len = (__le32 *)(resp_addr + HWRM_RESP_LEN_OFFSET); if (intr_process) { u16 seq_id = bp->hwrm_intr_seq_id; @@ -4298,9 +4294,8 @@ static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len, le16_to_cpu(req->req_type)); return -EBUSY; } - len = (le32_to_cpu(*resp_len) & HWRM_RESP_LEN_MASK) >> - HWRM_RESP_LEN_SFT; - valid = resp_addr + len - 1; + len = le16_to_cpu(resp->resp_len); + valid = ((u8 *)resp) + len - 1; } else { int j; @@ -4311,8 +4306,7 @@ static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len, */ if (test_bit(BNXT_STATE_FW_FATAL_COND, &bp->state)) return -EBUSY; - len = (le32_to_cpu(*resp_len) & HWRM_RESP_LEN_MASK) >> - HWRM_RESP_LEN_SFT; + len = le16_to_cpu(resp->resp_len); if (len) break; /* on first few passes, just barely sleep */ @@ -4334,7 +4328,7 @@ static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len, } /* Last byte of resp contains valid bit */ - valid = resp_addr + len - 1; + valid = ((u8 *)resp) + len - 1; for (j = 0; j < HWRM_VALID_BIT_DELAY_USEC; j++) { /* make sure we read from updated DMA memory */ dma_rmb(); @@ -9333,7 +9327,7 @@ static void __bnxt_close_nic(struct bnxt *bp, bool irq_re_init, bnxt_free_skbs(bp); /* Save ring stats before shutdown */ - if (bp->bnapi) + if (bp->bnapi && irq_re_init) bnxt_get_ring_stats(bp, &bp->net_stats_prev); if (irq_re_init) { bnxt_free_irq(bp); diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h index c04ac4a36005..9e173d74b72a 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h @@ -659,11 +659,6 @@ struct nqe_cn { #define HWRM_CMD_TIMEOUT (bp->hwrm_cmd_timeout) #define HWRM_RESET_TIMEOUT ((HWRM_CMD_TIMEOUT) * 4) #define HWRM_COREDUMP_TIMEOUT ((HWRM_CMD_TIMEOUT) * 12) -#define HWRM_RESP_ERR_CODE_MASK 0xffff -#define HWRM_RESP_LEN_OFFSET 4 -#define HWRM_RESP_LEN_MASK 0xffff0000 -#define HWRM_RESP_LEN_SFT 16 -#define HWRM_RESP_VALID_MASK 0xff000000 #define BNXT_HWRM_REQ_MAX_SIZE 128 #define BNXT_HWRM_REQS_PER_PAGE (BNXT_PAGE_SIZE / \ BNXT_HWRM_REQ_MAX_SIZE) @@ -1875,7 +1870,6 @@ struct bnxt { u8 dsn[8]; struct bnxt_tc_info *tc_info; struct list_head tc_indr_block_list; - struct notifier_block tc_netdev_nb; struct dentry *debugfs_pdev; struct device *hwmon_dev; }; diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c index dd0c3f227009..6b88143af5ea 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c @@ -2119,11 +2119,12 @@ int bnxt_flash_package_from_file(struct net_device *dev, const char *filename, bnxt_hwrm_fw_set_time(bp); - if (bnxt_find_nvram_item(dev, BNX_DIR_TYPE_UPDATE, - BNX_DIR_ORDINAL_FIRST, BNX_DIR_EXT_NONE, - &index, &item_len, NULL) != 0) { + rc = bnxt_find_nvram_item(dev, BNX_DIR_TYPE_UPDATE, + BNX_DIR_ORDINAL_FIRST, BNX_DIR_EXT_NONE, + &index, &item_len, NULL); + if (rc) { netdev_err(dev, "PKG update area not created in nvram\n"); - return -ENOBUFS; + return rc; } rc = request_firmware(&fw, filename, &dev->dev); diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c index 782ea0771221..0eef4f5e4a46 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c @@ -1939,53 +1939,25 @@ static int bnxt_tc_setup_indr_block(struct net_device *netdev, struct bnxt 
*bp, return 0; } -static int bnxt_tc_setup_indr_cb(struct net_device *netdev, void *cb_priv, - enum tc_setup_type type, void *type_data) -{ - switch (type) { - case TC_SETUP_BLOCK: - return bnxt_tc_setup_indr_block(netdev, cb_priv, type_data); - default: - return -EOPNOTSUPP; - } -} - static bool bnxt_is_netdev_indr_offload(struct net_device *netdev) { return netif_is_vxlan(netdev); } -static int bnxt_tc_indr_block_event(struct notifier_block *nb, - unsigned long event, void *ptr) +static int bnxt_tc_setup_indr_cb(struct net_device *netdev, void *cb_priv, + enum tc_setup_type type, void *type_data) { - struct net_device *netdev; - struct bnxt *bp; - int rc; - - netdev = netdev_notifier_info_to_dev(ptr); if (!bnxt_is_netdev_indr_offload(netdev)) - return NOTIFY_OK; - - bp = container_of(nb, struct bnxt, tc_netdev_nb); + return -EOPNOTSUPP; - switch (event) { - case NETDEV_REGISTER: - rc = __flow_indr_block_cb_register(netdev, bp, - bnxt_tc_setup_indr_cb, - bp); - if (rc) - netdev_info(bp->dev, - "Failed to register indirect blk: dev: %s\n", - netdev->name); - break; - case NETDEV_UNREGISTER: - __flow_indr_block_cb_unregister(netdev, - bnxt_tc_setup_indr_cb, - bp); + switch (type) { + case TC_SETUP_BLOCK: + return bnxt_tc_setup_indr_block(netdev, cb_priv, type_data); + default: break; } - return NOTIFY_DONE; + return -EOPNOTSUPP; } static const struct rhashtable_params bnxt_tc_flow_ht_params = { @@ -2074,8 +2046,8 @@ int bnxt_init_tc(struct bnxt *bp) /* init indirect block notifications */ INIT_LIST_HEAD(&bp->tc_indr_block_list); - bp->tc_netdev_nb.notifier_call = bnxt_tc_indr_block_event; - rc = register_netdevice_notifier(&bp->tc_netdev_nb); + + rc = flow_indr_dev_register(bnxt_tc_setup_indr_cb, bp); if (!rc) return 0; @@ -2101,7 +2073,8 @@ void bnxt_shutdown_tc(struct bnxt *bp) if (!bnxt_tc_flower_enabled(bp)) return; - unregister_netdevice_notifier(&bp->tc_netdev_nb); + flow_indr_dev_unregister(bnxt_tc_setup_indr_cb, bp, + bnxt_tc_setup_indr_block_cb); rhashtable_destroy(&tc_info->flow_table); rhashtable_destroy(&tc_info->l2_table); rhashtable_destroy(&tc_info->decap_l2_table); diff --git a/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c b/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c index 9d868403d86c..cbaa1924afbe 100644 --- a/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c +++ b/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c @@ -234,6 +234,11 @@ static void octeon_mgmt_rx_fill_ring(struct net_device *netdev) /* Put it in the ring. 
*/ p->rx_ring[p->rx_next_fill] = re.d64; + /* Make sure there is no reorder of filling the ring and ringing + * the bell + */ + wmb(); + dma_sync_single_for_device(p->dev, p->rx_ring_handle, ring_size_to_bytes(OCTEON_MGMT_RX_RING_SIZE), DMA_BIDIRECTIONAL); diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c index 6b1d3df4b9ba..9e3c6b36cde8 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c +++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c @@ -174,13 +174,14 @@ static int setup_sge_queues_uld(struct adapter *adap, unsigned int uld_type, bool lro) { struct sge_uld_rxq_info *rxq_info = adap->sge.uld_rxq_info[uld_type]; - int i, ret = 0; + int i, ret; - ret = !(!alloc_uld_rxqs(adap, rxq_info, lro)); + ret = alloc_uld_rxqs(adap, rxq_info, lro); + if (ret) + return ret; /* Tell uP to route control queue completions to rdma rspq */ - if (adap->flags & CXGB4_FULL_INIT_DONE && - !ret && uld_type == CXGB4_ULD_RDMA) { + if (adap->flags & CXGB4_FULL_INIT_DONE && uld_type == CXGB4_ULD_RDMA) { struct sge *s = &adap->sge; unsigned int cmplqid; u32 param, cmdop; diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c index c4416a5f8816..2972244e6eb0 100644 --- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c @@ -2914,7 +2914,7 @@ static int dpaa_eth_probe(struct platform_device *pdev) } /* Do this here, so we can be verbose early */ - SET_NETDEV_DEV(net_dev, dev); + SET_NETDEV_DEV(net_dev, dev->parent); dev_set_drvdata(dev, net_dev); priv = netdev_priv(net_dev); diff --git a/drivers/net/ethernet/freescale/dpaa2/Kconfig b/drivers/net/ethernet/freescale/dpaa2/Kconfig index c6fb8e4021ac..feea797cde02 100644 --- a/drivers/net/ethernet/freescale/dpaa2/Kconfig +++ b/drivers/net/ethernet/freescale/dpaa2/Kconfig @@ -9,6 +9,16 @@ config FSL_DPAA2_ETH The driver manages network objects discovered on the Freescale MC bus. +if FSL_DPAA2_ETH +config FSL_DPAA2_ETH_DCB + bool "Data Center Bridging (DCB) Support" + default n + depends on DCB + help + Enable Priority-Based Flow Control (PFC) support for DPAA2 Ethernet + devices. 
+endif + config FSL_DPAA2_PTP_CLOCK tristate "Freescale DPAA2 PTP Clock" depends on FSL_DPAA2_ETH && PTP_1588_CLOCK_QORIQ diff --git a/drivers/net/ethernet/freescale/dpaa2/Makefile b/drivers/net/ethernet/freescale/dpaa2/Makefile index 69184ca3b7b9..6e7f33c956bf 100644 --- a/drivers/net/ethernet/freescale/dpaa2/Makefile +++ b/drivers/net/ethernet/freescale/dpaa2/Makefile @@ -7,6 +7,7 @@ obj-$(CONFIG_FSL_DPAA2_ETH) += fsl-dpaa2-eth.o obj-$(CONFIG_FSL_DPAA2_PTP_CLOCK) += fsl-dpaa2-ptp.o fsl-dpaa2-eth-objs := dpaa2-eth.o dpaa2-ethtool.o dpni.o dpaa2-mac.o dpmac.o +fsl-dpaa2-eth-${CONFIG_FSL_DPAA2_ETH_DCB} += dpaa2-eth-dcb.o fsl-dpaa2-eth-${CONFIG_DEBUG_FS} += dpaa2-eth-debugfs.o fsl-dpaa2-ptp-objs := dpaa2-ptp.o dprtc.o diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth-dcb.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth-dcb.c new file mode 100644 index 000000000000..83dee575c2fa --- /dev/null +++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth-dcb.c @@ -0,0 +1,150 @@ +// SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) +/* Copyright 2020 NXP */ + +#include "dpaa2-eth.h" + +static int dpaa2_eth_dcbnl_ieee_getpfc(struct net_device *net_dev, + struct ieee_pfc *pfc) +{ + struct dpaa2_eth_priv *priv = netdev_priv(net_dev); + + if (!(priv->link_state.options & DPNI_LINK_OPT_PFC_PAUSE)) + return 0; + + memcpy(pfc, &priv->pfc, sizeof(priv->pfc)); + pfc->pfc_cap = dpaa2_eth_tc_count(priv); + + return 0; +} + +static inline bool is_prio_enabled(u8 pfc_en, u8 tc) +{ + return !!(pfc_en & (1 << tc)); +} + +static int set_pfc_cn(struct dpaa2_eth_priv *priv, u8 pfc_en) +{ + struct dpni_congestion_notification_cfg cfg = {0}; + int i, err; + + cfg.notification_mode = DPNI_CONG_OPT_FLOW_CONTROL; + cfg.units = DPNI_CONGESTION_UNIT_FRAMES; + cfg.message_iova = 0ULL; + cfg.message_ctx = 0ULL; + + for (i = 0; i < dpaa2_eth_tc_count(priv); i++) { + if (is_prio_enabled(pfc_en, i)) { + cfg.threshold_entry = DPAA2_ETH_CN_THRESH_ENTRY(priv); + cfg.threshold_exit = DPAA2_ETH_CN_THRESH_EXIT(priv); + } else { + /* For priorities not set in the pfc_en mask, we leave + * the congestion thresholds at zero, which effectively + * disables generation of PFC frames for them + */ + cfg.threshold_entry = 0; + cfg.threshold_exit = 0; + } + + err = dpni_set_congestion_notification(priv->mc_io, 0, + priv->mc_token, + DPNI_QUEUE_RX, i, &cfg); + if (err) { + netdev_err(priv->net_dev, + "dpni_set_congestion_notification failed\n"); + return err; + } + } + + return 0; +} + +static int dpaa2_eth_dcbnl_ieee_setpfc(struct net_device *net_dev, + struct ieee_pfc *pfc) +{ + struct dpaa2_eth_priv *priv = netdev_priv(net_dev); + struct dpni_link_cfg link_cfg = {0}; + bool tx_pause; + int err; + + if (pfc->mbc || pfc->delay) + return -EOPNOTSUPP; + + /* If same PFC enabled mask, nothing to do */ + if (priv->pfc.pfc_en == pfc->pfc_en) + return 0; + + /* We allow PFC configuration even if it won't have any effect until + * general pause frames are enabled + */ + tx_pause = dpaa2_eth_tx_pause_enabled(priv->link_state.options); + if (!dpaa2_eth_rx_pause_enabled(priv->link_state.options) || !tx_pause) + netdev_warn(net_dev, "Pause support must be enabled in order for PFC to work!\n"); + + link_cfg.rate = priv->link_state.rate; + link_cfg.options = priv->link_state.options; + if (pfc->pfc_en) + link_cfg.options |= DPNI_LINK_OPT_PFC_PAUSE; + else + link_cfg.options &= ~DPNI_LINK_OPT_PFC_PAUSE; + err = dpni_set_link_cfg(priv->mc_io, 0, priv->mc_token, &link_cfg); + if (err) { + netdev_err(net_dev, "dpni_set_link_cfg failed\n"); + return 
err; + } + + /* Configure congestion notifications for the enabled priorities */ + err = set_pfc_cn(priv, pfc->pfc_en); + if (err) + return err; + + memcpy(&priv->pfc, pfc, sizeof(priv->pfc)); + priv->pfc_enabled = !!pfc->pfc_en; + + dpaa2_eth_set_rx_taildrop(priv, tx_pause, priv->pfc_enabled); + + return 0; +} + +static u8 dpaa2_eth_dcbnl_getdcbx(struct net_device *net_dev) +{ + struct dpaa2_eth_priv *priv = netdev_priv(net_dev); + + return priv->dcbx_mode; +} + +static u8 dpaa2_eth_dcbnl_setdcbx(struct net_device *net_dev, u8 mode) +{ + struct dpaa2_eth_priv *priv = netdev_priv(net_dev); + + return (mode != (priv->dcbx_mode)) ? 1 : 0; +} + +static u8 dpaa2_eth_dcbnl_getcap(struct net_device *net_dev, int capid, u8 *cap) +{ + struct dpaa2_eth_priv *priv = netdev_priv(net_dev); + + switch (capid) { + case DCB_CAP_ATTR_PFC: + *cap = true; + break; + case DCB_CAP_ATTR_PFC_TCS: + *cap = 1 << (dpaa2_eth_tc_count(priv) - 1); + break; + case DCB_CAP_ATTR_DCBX: + *cap = priv->dcbx_mode; + break; + default: + *cap = false; + break; + } + + return 0; +} + +const struct dcbnl_rtnl_ops dpaa2_eth_dcbnl_ops = { + .ieee_getpfc = dpaa2_eth_dcbnl_ieee_getpfc, + .ieee_setpfc = dpaa2_eth_dcbnl_ieee_setpfc, + .getdcbx = dpaa2_eth_dcbnl_getdcbx, + .setdcbx = dpaa2_eth_dcbnl_setdcbx, + .getcap = dpaa2_eth_dcbnl_getcap, +}; diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth-debugfs.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth-debugfs.c index 0a31e4268dfb..c453a23045c1 100644 --- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth-debugfs.c +++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth-debugfs.c @@ -81,8 +81,8 @@ static int dpaa2_dbg_fqs_show(struct seq_file *file, void *offset) int i, err; seq_printf(file, "FQ stats for %s:\n", priv->net_dev->name); - seq_printf(file, "%s%16s%16s%16s%16s\n", - "VFQID", "CPU", "Type", "Frames", "Pending frames"); + seq_printf(file, "%s%16s%16s%16s%16s%16s\n", + "VFQID", "CPU", "TC", "Type", "Frames", "Pending frames"); for (i = 0; i < priv->num_fqs; i++) { fq = &priv->fq[i]; @@ -90,9 +90,10 @@ static int dpaa2_dbg_fqs_show(struct seq_file *file, void *offset) if (err) fcnt = 0; - seq_printf(file, "%5d%16d%16s%16llu%16u\n", + seq_printf(file, "%5d%16d%16d%16s%16llu%16u\n", fq->fqid, fq->target_cpu, + fq->tc, fq_type_to_str(fq), fq->stats.frames, fcnt); diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c index fe3806d54630..8fb48de5d18c 100644 --- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c +++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c @@ -1287,31 +1287,67 @@ static void disable_ch_napi(struct dpaa2_eth_priv *priv) } } -static void dpaa2_eth_set_rx_taildrop(struct dpaa2_eth_priv *priv, bool enable) +void dpaa2_eth_set_rx_taildrop(struct dpaa2_eth_priv *priv, + bool tx_pause, bool pfc) { struct dpni_taildrop td = {0}; + struct dpaa2_eth_fq *fq; int i, err; - if (priv->rx_td_enabled == enable) - return; + /* FQ taildrop: threshold is in bytes, per frame queue. 
Enabled if + * flow control is disabled (as it might interfere with either the + * buffer pool depletion trigger for pause frames or with the group + * congestion trigger for PFC frames) + */ + td.enable = !tx_pause; + if (priv->rx_fqtd_enabled == td.enable) + goto set_cgtd; - td.enable = enable; - td.threshold = DPAA2_ETH_TAILDROP_THRESH; + td.threshold = DPAA2_ETH_FQ_TAILDROP_THRESH; + td.units = DPNI_CONGESTION_UNIT_BYTES; for (i = 0; i < priv->num_fqs; i++) { - if (priv->fq[i].type != DPAA2_RX_FQ) + fq = &priv->fq[i]; + if (fq->type != DPAA2_RX_FQ) continue; err = dpni_set_taildrop(priv->mc_io, 0, priv->mc_token, - DPNI_CP_QUEUE, DPNI_QUEUE_RX, 0, - priv->fq[i].flowid, &td); + DPNI_CP_QUEUE, DPNI_QUEUE_RX, + fq->tc, fq->flowid, &td); if (err) { netdev_err(priv->net_dev, - "dpni_set_taildrop() failed\n"); - break; + "dpni_set_taildrop(FQ) failed\n"); + return; } } - priv->rx_td_enabled = enable; + priv->rx_fqtd_enabled = td.enable; + +set_cgtd: + /* Congestion group taildrop: threshold is in frames, per group + * of FQs belonging to the same traffic class. + * Enabled if general Tx pause is disabled or if PFCs are enabled + * (the congestion group threshold for PFC generation is lower than the + * CG taildrop threshold, so it won't interfere with it; we also + * want frames in non-PFC enabled traffic classes to be kept in check) + */ + td.enable = !tx_pause || (tx_pause && pfc); + if (priv->rx_cgtd_enabled == td.enable) + return; + + td.threshold = DPAA2_ETH_CG_TAILDROP_THRESH(priv); + td.units = DPNI_CONGESTION_UNIT_FRAMES; + for (i = 0; i < dpaa2_eth_tc_count(priv); i++) { + err = dpni_set_taildrop(priv->mc_io, 0, priv->mc_token, + DPNI_CP_GROUP, DPNI_QUEUE_RX, + i, 0, &td); + if (err) { + netdev_err(priv->net_dev, + "dpni_set_taildrop(CG) failed\n"); + return; + } + } + + priv->rx_cgtd_enabled = td.enable; } static int link_state_update(struct dpaa2_eth_priv *priv) @@ -1331,9 +1367,8 @@ static int link_state_update(struct dpaa2_eth_priv *priv) * Rx FQ taildrop configuration as well. We configure taildrop * only when pause frame generation is disabled. */ - tx_pause = !!(state.options & DPNI_LINK_OPT_PAUSE) ^ - !!(state.options & DPNI_LINK_OPT_ASYM_PAUSE); - dpaa2_eth_set_rx_taildrop(priv, !tx_pause); + tx_pause = dpaa2_eth_tx_pause_enabled(state.options); + dpaa2_eth_set_rx_taildrop(priv, tx_pause, priv->pfc_enabled); /* When we manage the MAC/PHY using phylink there is no need * to manually update the netif_carrier. @@ -2407,7 +2442,7 @@ static void set_fq_affinity(struct dpaa2_eth_priv *priv) static void setup_fqs(struct dpaa2_eth_priv *priv) { - int i; + int i, j; /* We have one TxConf FQ per Tx flow. * The number of Tx and Rx queues is the same.
@@ -2419,10 +2454,13 @@ static void setup_fqs(struct dpaa2_eth_priv *priv) priv->fq[priv->num_fqs++].flowid = (u16)i; } - for (i = 0; i < dpaa2_eth_queue_count(priv); i++) { - priv->fq[priv->num_fqs].type = DPAA2_RX_FQ; - priv->fq[priv->num_fqs].consume = dpaa2_eth_rx; - priv->fq[priv->num_fqs++].flowid = (u16)i; + for (j = 0; j < dpaa2_eth_tc_count(priv); j++) { + for (i = 0; i < dpaa2_eth_queue_count(priv); i++) { + priv->fq[priv->num_fqs].type = DPAA2_RX_FQ; + priv->fq[priv->num_fqs].consume = dpaa2_eth_rx; + priv->fq[priv->num_fqs].tc = (u8)j; + priv->fq[priv->num_fqs++].flowid = (u16)i; + } } /* For each FQ, decide on which core to process incoming frames */ @@ -2691,6 +2729,118 @@ out_err: priv->enqueue = dpaa2_eth_enqueue_qd; } +/* Configure ingress classification based on VLAN PCP */ +static int set_vlan_qos(struct dpaa2_eth_priv *priv) +{ + struct device *dev = priv->net_dev->dev.parent; + struct dpkg_profile_cfg kg_cfg = {0}; + struct dpni_qos_tbl_cfg qos_cfg = {0}; + struct dpni_rule_cfg key_params; + void *dma_mem, *key, *mask; + u8 key_size = 2; /* VLAN TCI field */ + int i, pcp, err; + + /* VLAN-based classification only makes sense if we have multiple + * traffic classes. + * Also, we need to extract just the 3-bit PCP field from the VLAN + * header and we can only do that by using a mask + */ + if (dpaa2_eth_tc_count(priv) == 1 || !dpaa2_eth_fs_mask_enabled(priv)) { + dev_dbg(dev, "VLAN-based QoS classification not supported\n"); + return -EOPNOTSUPP; + } + + dma_mem = kzalloc(DPAA2_CLASSIFIER_DMA_SIZE, GFP_KERNEL); + if (!dma_mem) + return -ENOMEM; + + kg_cfg.num_extracts = 1; + kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR; + kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_VLAN; + kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FULL_FIELD; + kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_VLAN_TCI; + + err = dpni_prepare_key_cfg(&kg_cfg, dma_mem); + if (err) { + dev_err(dev, "dpni_prepare_key_cfg failed\n"); + goto out_free_tbl; + } + + /* set QoS table */ + qos_cfg.default_tc = 0; + qos_cfg.discard_on_miss = 0; + qos_cfg.key_cfg_iova = dma_map_single(dev, dma_mem, + DPAA2_CLASSIFIER_DMA_SIZE, + DMA_TO_DEVICE); + if (dma_mapping_error(dev, qos_cfg.key_cfg_iova)) { + dev_err(dev, "QoS table DMA mapping failed\n"); + err = -ENOMEM; + goto out_free_tbl; + } + + err = dpni_set_qos_table(priv->mc_io, 0, priv->mc_token, &qos_cfg); + if (err) { + dev_err(dev, "dpni_set_qos_table failed\n"); + goto out_unmap_tbl; + } + + /* Add QoS table entries */ + key = kzalloc(key_size * 2, GFP_KERNEL); + if (!key) { + err = -ENOMEM; + goto out_unmap_tbl; + } + mask = key + key_size; + *(__be16 *)mask = cpu_to_be16(VLAN_PRIO_MASK); + + key_params.key_iova = dma_map_single(dev, key, key_size * 2, + DMA_TO_DEVICE); + if (dma_mapping_error(dev, key_params.key_iova)) { + dev_err(dev, "Qos table entry DMA mapping failed\n"); + err = -ENOMEM; + goto out_free_key; + } + + key_params.mask_iova = key_params.key_iova + key_size; + key_params.key_size = key_size; + + /* We add rules for PCP-based distribution starting with highest + * priority (VLAN PCP = 7). 
If this DPNI doesn't have enough traffic + * classes to accommodate all priority levels, the lowest ones end up + * on TC 0 which was configured as default + */ + for (i = dpaa2_eth_tc_count(priv) - 1, pcp = 7; i >= 0; i--, pcp--) { + *(__be16 *)key = cpu_to_be16(pcp << VLAN_PRIO_SHIFT); + dma_sync_single_for_device(dev, key_params.key_iova, + key_size * 2, DMA_TO_DEVICE); + + err = dpni_add_qos_entry(priv->mc_io, 0, priv->mc_token, + &key_params, i, i); + if (err) { + dev_err(dev, "dpni_add_qos_entry failed\n"); + dpni_clear_qos_table(priv->mc_io, 0, priv->mc_token); + goto out_unmap_key; + } + } + + priv->vlan_cls_enabled = true; + + /* Table and key memory is not persistent, clean everything up after + * configuration is finished + */ +out_unmap_key: + dma_unmap_single(dev, key_params.key_iova, key_size * 2, DMA_TO_DEVICE); +out_free_key: + kfree(key); +out_unmap_tbl: + dma_unmap_single(dev, qos_cfg.key_cfg_iova, DPAA2_CLASSIFIER_DMA_SIZE, + DMA_TO_DEVICE); +out_free_tbl: + kfree(dma_mem); + + return err; +} + /* Configure the DPNI object this interface is associated with */ static int setup_dpni(struct fsl_mc_device *ls_dev) { @@ -2753,6 +2903,10 @@ static int setup_dpni(struct fsl_mc_device *ls_dev) goto close; } + err = set_vlan_qos(priv); + if (err && err != -EOPNOTSUPP) + goto close; + priv->cls_rules = devm_kzalloc(dev, sizeof(struct dpaa2_eth_cls_rule) * dpaa2_eth_fs_count(priv), GFP_KERNEL); if (!priv->cls_rules) { @@ -2789,7 +2943,7 @@ static int setup_rx_flow(struct dpaa2_eth_priv *priv, int err; err = dpni_get_queue(priv->mc_io, 0, priv->mc_token, - DPNI_QUEUE_RX, 0, fq->flowid, &queue, &qid); + DPNI_QUEUE_RX, fq->tc, fq->flowid, &queue, &qid); if (err) { dev_err(dev, "dpni_get_queue(RX) failed\n"); return err; @@ -2802,7 +2956,7 @@ static int setup_rx_flow(struct dpaa2_eth_priv *priv, queue.destination.priority = 1; queue.user_context = (u64)(uintptr_t)fq; err = dpni_set_queue(priv->mc_io, 0, priv->mc_token, - DPNI_QUEUE_RX, 0, fq->flowid, + DPNI_QUEUE_RX, fq->tc, fq->flowid, DPNI_QUEUE_OPT_USER_CTX | DPNI_QUEUE_OPT_DEST, &queue); if (err) { @@ -2811,6 +2965,10 @@ static int setup_rx_flow(struct dpaa2_eth_priv *priv, } /* xdp_rxq setup */ + /* only once for each channel */ + if (fq->tc > 0) + return 0; + err = xdp_rxq_info_reg(&fq->channel->xdp_rxq, priv->net_dev, fq->flowid); if (err) { @@ -2948,7 +3106,7 @@ static int config_legacy_hash_key(struct dpaa2_eth_priv *priv, dma_addr_t key) { struct device *dev = priv->net_dev->dev.parent; struct dpni_rx_tc_dist_cfg dist_cfg; - int err; + int i, err = 0; memset(&dist_cfg, 0, sizeof(dist_cfg)); @@ -2956,9 +3114,14 @@ static int config_legacy_hash_key(struct dpaa2_eth_priv *priv, dma_addr_t key) dist_cfg.dist_size = dpaa2_eth_queue_count(priv); dist_cfg.dist_mode = DPNI_DIST_MODE_HASH; - err = dpni_set_rx_tc_dist(priv->mc_io, 0, priv->mc_token, 0, &dist_cfg); - if (err) - dev_err(dev, "dpni_set_rx_tc_dist failed\n"); + for (i = 0; i < dpaa2_eth_tc_count(priv); i++) { + err = dpni_set_rx_tc_dist(priv->mc_io, 0, priv->mc_token, + i, &dist_cfg); + if (err) { + dev_err(dev, "dpni_set_rx_tc_dist failed\n"); + break; + } + } return err; } @@ -2968,7 +3131,7 @@ static int config_hash_key(struct dpaa2_eth_priv *priv, dma_addr_t key) { struct device *dev = priv->net_dev->dev.parent; struct dpni_rx_dist_cfg dist_cfg; - int err; + int i, err = 0; memset(&dist_cfg, 0, sizeof(dist_cfg)); @@ -2976,9 +3139,15 @@ static int config_hash_key(struct dpaa2_eth_priv *priv, dma_addr_t key) dist_cfg.dist_size = dpaa2_eth_queue_count(priv); 
dist_cfg.enable = 1; - err = dpni_set_rx_hash_dist(priv->mc_io, 0, priv->mc_token, &dist_cfg); - if (err) - dev_err(dev, "dpni_set_rx_hash_dist failed\n"); + for (i = 0; i < dpaa2_eth_tc_count(priv); i++) { + dist_cfg.tc = i; + err = dpni_set_rx_hash_dist(priv->mc_io, 0, priv->mc_token, + &dist_cfg); + if (err) { + dev_err(dev, "dpni_set_rx_hash_dist failed\n"); + break; + } + } return err; } @@ -2988,7 +3157,7 @@ static int config_cls_key(struct dpaa2_eth_priv *priv, dma_addr_t key) { struct device *dev = priv->net_dev->dev.parent; struct dpni_rx_dist_cfg dist_cfg; - int err; + int i, err = 0; memset(&dist_cfg, 0, sizeof(dist_cfg)); @@ -2996,9 +3165,15 @@ static int config_cls_key(struct dpaa2_eth_priv *priv, dma_addr_t key) dist_cfg.dist_size = dpaa2_eth_queue_count(priv); dist_cfg.enable = 1; - err = dpni_set_rx_fs_dist(priv->mc_io, 0, priv->mc_token, &dist_cfg); - if (err) - dev_err(dev, "dpni_set_rx_fs_dist failed\n"); + for (i = 0; i < dpaa2_eth_tc_count(priv); i++) { + dist_cfg.tc = i; + err = dpni_set_rx_fs_dist(priv->mc_io, 0, priv->mc_token, + &dist_cfg); + if (err) { + dev_err(dev, "dpni_set_rx_fs_dist failed\n"); + break; + } + } return err; } @@ -3684,6 +3859,15 @@ static int dpaa2_eth_probe(struct fsl_mc_device *dpni_dev) if (err) goto err_alloc_rings; +#ifdef CONFIG_FSL_DPAA2_ETH_DCB + if (dpaa2_eth_has_pause_support(priv) && priv->vlan_cls_enabled) { + priv->dcbx_mode = DCB_CAP_DCBX_HOST | DCB_CAP_DCBX_VER_IEEE; + net_dev->dcbnl_ops = &dpaa2_eth_dcbnl_ops; + } else { + dev_dbg(dev, "PFC not supported\n"); + } +#endif + err = setup_irqs(dpni_dev); if (err) { netdev_warn(net_dev, "Failed to set link interrupt, fall back to polling\n"); diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h index 0581fbf1f98c..2d7ada0f0dbd 100644 --- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h +++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h @@ -6,6 +6,7 @@ #ifndef __DPAA2_ETH_H #define __DPAA2_ETH_H +#include <linux/dcbnl.h> #include <linux/netdevice.h> #include <linux/if_vlan.h> #include <linux/fsl/mc.h> @@ -36,27 +37,46 @@ /* Convert L3 MTU to L2 MFL */ #define DPAA2_ETH_L2_MAX_FRM(mtu) ((mtu) + VLAN_ETH_HLEN) -/* Set the taildrop threshold (in bytes) to allow the enqueue of several jumbo - * frames in the Rx queues (length of the current frame is not - * taken into account when making the taildrop decision) +/* Set the taildrop threshold (in bytes) to allow the enqueue of a large + * enough number of jumbo frames in the Rx queues (length of the current + * frame is not taken into account when making the taildrop decision) */ -#define DPAA2_ETH_TAILDROP_THRESH (64 * 1024) +#define DPAA2_ETH_FQ_TAILDROP_THRESH (1024 * 1024) /* Maximum number of Tx confirmation frames to be processed * in a single NAPI call */ #define DPAA2_ETH_TXCONF_PER_NAPI 256 -/* Buffer quota per queue. Must be large enough such that for minimum sized - * frames taildrop kicks in before the bpool gets depleted, so we compute - * how many 64B frames fit inside the taildrop threshold and add a margin - * to accommodate the buffer refill delay. +/* Buffer qouta per channel. We want to keep in check number of ingress frames + * in flight: for small sized frames, congestion group taildrop may kick in + * first; for large sizes, Rx FQ taildrop threshold will ensure only a + * reasonable number of frames will be pending at any given time. 
+ * Ingress frame drop due to buffer pool depletion should be a corner case only */ -#define DPAA2_ETH_MAX_FRAMES_PER_QUEUE (DPAA2_ETH_TAILDROP_THRESH / 64) -#define DPAA2_ETH_NUM_BUFS (DPAA2_ETH_MAX_FRAMES_PER_QUEUE + 256) +#define DPAA2_ETH_NUM_BUFS 1280 #define DPAA2_ETH_REFILL_THRESH \ (DPAA2_ETH_NUM_BUFS - DPAA2_ETH_BUFS_PER_CMD) +/* Congestion group taildrop threshold: number of frames allowed to accumulate + * at any moment in a group of Rx queues belonging to the same traffic class. + * Choose value such that we don't risk depleting the buffer pool before the + * taildrop kicks in + */ +#define DPAA2_ETH_CG_TAILDROP_THRESH(priv) \ + (1024 * dpaa2_eth_queue_count(priv) / dpaa2_eth_tc_count(priv)) + +/* Congestion group notification threshold: when this many frames accumulate + * on the Rx queues belonging to the same TC, the MAC is instructed to send + * PFC frames for that TC. + * When number of pending frames drops below exit threshold transmission of + * PFC frames is stopped. + */ +#define DPAA2_ETH_CN_THRESH_ENTRY(priv) \ + (DPAA2_ETH_CG_TAILDROP_THRESH(priv) / 2) +#define DPAA2_ETH_CN_THRESH_EXIT(priv) \ + (DPAA2_ETH_CN_THRESH_ENTRY(priv) * 3 / 4) + /* Maximum number of buffers that can be acquired/released through a single * QBMan command */ @@ -294,7 +314,9 @@ struct dpaa2_eth_ch_stats { /* Maximum number of queues associated with a DPNI */ #define DPAA2_ETH_MAX_TCS 8 -#define DPAA2_ETH_MAX_RX_QUEUES 16 +#define DPAA2_ETH_MAX_RX_QUEUES_PER_TC 16 +#define DPAA2_ETH_MAX_RX_QUEUES \ + (DPAA2_ETH_MAX_RX_QUEUES_PER_TC * DPAA2_ETH_MAX_TCS) #define DPAA2_ETH_MAX_TX_QUEUES 16 #define DPAA2_ETH_MAX_QUEUES (DPAA2_ETH_MAX_RX_QUEUES + \ DPAA2_ETH_MAX_TX_QUEUES) @@ -414,7 +436,8 @@ struct dpaa2_eth_priv { struct dpaa2_eth_drv_stats __percpu *percpu_extras; u16 mc_token; - u8 rx_td_enabled; + u8 rx_fqtd_enabled; + u8 rx_cgtd_enabled; struct dpni_link_state link_state; bool do_link_poll; @@ -425,6 +448,12 @@ struct dpaa2_eth_priv { u64 rx_cls_fields; struct dpaa2_eth_cls_rule *cls_rules; u8 rx_cls_enabled; + u8 vlan_cls_enabled; + u8 pfc_enabled; +#ifdef CONFIG_FSL_DPAA2_ETH_DCB + u8 dcbx_mode; + struct ieee_pfc pfc; +#endif struct bpf_prog *xdp_prog; #ifdef CONFIG_DEBUG_FS struct dpaa2_debugfs dbg; @@ -507,6 +536,17 @@ enum dpaa2_eth_rx_dist { (dpaa2_eth_cmp_dpni_ver((priv), DPNI_PAUSE_VER_MAJOR, \ DPNI_PAUSE_VER_MINOR) >= 0) +static inline bool dpaa2_eth_tx_pause_enabled(u64 link_options) +{ + return !!(link_options & DPNI_LINK_OPT_PAUSE) ^ + !!(link_options & DPNI_LINK_OPT_ASYM_PAUSE); +} + +static inline bool dpaa2_eth_rx_pause_enabled(u64 link_options) +{ + return !!(link_options & DPNI_LINK_OPT_PAUSE); +} + static inline unsigned int dpaa2_eth_needed_headroom(struct dpaa2_eth_priv *priv, struct sk_buff *skb) @@ -546,4 +586,9 @@ int dpaa2_eth_cls_key_size(u64 key); int dpaa2_eth_cls_fld_off(int prot, int field); void dpaa2_eth_cls_trim_rule(void *key_mem, u64 fields); +void dpaa2_eth_set_rx_taildrop(struct dpaa2_eth_priv *priv, + bool tx_pause, bool pfc); + +extern const struct dcbnl_rtnl_ops dpaa2_eth_dcbnl_ops; + #endif /* __DPAA2_H */ diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c index 049afd1d6252..e88269fe3de7 100644 --- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c +++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c @@ -130,9 +130,8 @@ static void dpaa2_eth_get_pauseparam(struct net_device *net_dev, return; } - pause->rx_pause = !!(link_options & DPNI_LINK_OPT_PAUSE); - pause->tx_pause = 
pause->rx_pause ^ - !!(link_options & DPNI_LINK_OPT_ASYM_PAUSE); + pause->rx_pause = dpaa2_eth_rx_pause_enabled(link_options); + pause->tx_pause = dpaa2_eth_tx_pause_enabled(link_options); pause->autoneg = AUTONEG_DISABLE; } @@ -547,7 +546,7 @@ static int do_cls_rule(struct net_device *net_dev, dma_addr_t key_iova; u64 fields = 0; void *key_buf; - int err; + int i, err; if (fs->ring_cookie != RX_CLS_FLOW_DISC && fs->ring_cookie >= dpaa2_eth_queue_count(priv)) @@ -607,11 +606,18 @@ static int do_cls_rule(struct net_device *net_dev, fs_act.options |= DPNI_FS_OPT_DISCARD; else fs_act.flow_id = fs->ring_cookie; - err = dpni_add_fs_entry(priv->mc_io, 0, priv->mc_token, 0, - fs->location, &rule_cfg, &fs_act); - } else { - err = dpni_remove_fs_entry(priv->mc_io, 0, priv->mc_token, 0, - &rule_cfg); + } + for (i = 0; i < dpaa2_eth_tc_count(priv); i++) { + if (add) + err = dpni_add_fs_entry(priv->mc_io, 0, priv->mc_token, + i, fs->location, &rule_cfg, + &fs_act); + else + err = dpni_remove_fs_entry(priv->mc_io, 0, + priv->mc_token, i, + &rule_cfg); + if (err) + break; } dma_unmap_single(dev, key_iova, rule_cfg.key_size * 2, DMA_TO_DEVICE); diff --git a/drivers/net/ethernet/freescale/dpaa2/dpni-cmd.h b/drivers/net/ethernet/freescale/dpaa2/dpni-cmd.h index d9b6918807af..fd069f67be9b 100644 --- a/drivers/net/ethernet/freescale/dpaa2/dpni-cmd.h +++ b/drivers/net/ethernet/freescale/dpaa2/dpni-cmd.h @@ -59,6 +59,10 @@ #define DPNI_CMDID_SET_RX_TC_DIST DPNI_CMD(0x235) +#define DPNI_CMDID_SET_QOS_TBL DPNI_CMD(0x240) +#define DPNI_CMDID_ADD_QOS_ENT DPNI_CMD(0x241) +#define DPNI_CMDID_REMOVE_QOS_ENT DPNI_CMD(0x242) +#define DPNI_CMDID_CLR_QOS_TBL DPNI_CMD(0x243) #define DPNI_CMDID_ADD_FS_ENT DPNI_CMD(0x244) #define DPNI_CMDID_REMOVE_FS_ENT DPNI_CMD(0x245) #define DPNI_CMDID_CLR_FS_ENT DPNI_CMD(0x246) @@ -567,4 +571,59 @@ struct dpni_cmd_remove_fs_entry { __le64 mask_iova; }; +#define DPNI_DISCARD_ON_MISS_SHIFT 0 +#define DPNI_DISCARD_ON_MISS_SIZE 1 + +struct dpni_cmd_set_qos_table { + __le32 pad; + u8 default_tc; + /* only the LSB */ + u8 discard_on_miss; + __le16 pad1[21]; + __le64 key_cfg_iova; +}; + +struct dpni_cmd_add_qos_entry { + __le16 pad; + u8 tc_id; + u8 key_size; + __le16 index; + __le16 pad1; + __le64 key_iova; + __le64 mask_iova; +}; + +struct dpni_cmd_remove_qos_entry { + u8 pad[3]; + u8 key_size; + __le32 pad1; + __le64 key_iova; + __le64 mask_iova; +}; + +#define DPNI_DEST_TYPE_SHIFT 0 +#define DPNI_DEST_TYPE_SIZE 4 +#define DPNI_CONG_UNITS_SHIFT 4 +#define DPNI_CONG_UNITS_SIZE 2 + +struct dpni_cmd_set_congestion_notification { + /* cmd word 0 */ + u8 qtype; + u8 tc; + u8 pad[6]; + /* cmd word 1 */ + __le32 dest_id; + __le16 notification_mode; + u8 dest_priority; + /* from LSB: dest_type: 4 units:2 */ + u8 type_units; + /* cmd word 2 */ + __le64 message_iova; + /* cmd word 3 */ + __le64 message_ctx; + /* cmd word 4 */ + __le32 threshold_entry; + __le32 threshold_exit; +}; + #endif /* _FSL_DPNI_CMD_H */ diff --git a/drivers/net/ethernet/freescale/dpaa2/dpni.c b/drivers/net/ethernet/freescale/dpaa2/dpni.c index dd54e6953aeb..6b479ba66465 100644 --- a/drivers/net/ethernet/freescale/dpaa2/dpni.c +++ b/drivers/net/ethernet/freescale/dpaa2/dpni.c @@ -1355,6 +1355,52 @@ int dpni_set_rx_tc_dist(struct fsl_mc_io *mc_io, } /** + * dpni_set_congestion_notification() - Set traffic class congestion + * notification configuration + * @mc_io: Pointer to MC portal's I/O object + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' + * @token: Token of DPNI object + * @qtype: Type of queue - Rx, Tx and 
Tx confirm types are supported + * @tc_id: Traffic class selection (0-7) + * @cfg: Congestion notification configuration + * + * Return: '0' on Success; error code otherwise. + */ +int dpni_set_congestion_notification( + struct fsl_mc_io *mc_io, + u32 cmd_flags, + u16 token, + enum dpni_queue_type qtype, + u8 tc_id, + const struct dpni_congestion_notification_cfg *cfg) +{ + struct dpni_cmd_set_congestion_notification *cmd_params; + struct fsl_mc_command cmd = { 0 }; + + /* prepare command */ + cmd.header = + mc_encode_cmd_header(DPNI_CMDID_SET_CONGESTION_NOTIFICATION, + cmd_flags, + token); + cmd_params = (struct dpni_cmd_set_congestion_notification *)cmd.params; + cmd_params->qtype = qtype; + cmd_params->tc = tc_id; + cmd_params->dest_id = cpu_to_le32(cfg->dest_cfg.dest_id); + cmd_params->notification_mode = cpu_to_le16(cfg->notification_mode); + cmd_params->dest_priority = cfg->dest_cfg.priority; + dpni_set_field(cmd_params->type_units, DEST_TYPE, + cfg->dest_cfg.dest_type); + dpni_set_field(cmd_params->type_units, CONG_UNITS, cfg->units); + cmd_params->message_iova = cpu_to_le64(cfg->message_iova); + cmd_params->message_ctx = cpu_to_le64(cfg->message_ctx); + cmd_params->threshold_entry = cpu_to_le32(cfg->threshold_entry); + cmd_params->threshold_exit = cpu_to_le32(cfg->threshold_exit); + + /* send command to mc*/ + return mc_send_command(mc_io, &cmd); +} + +/** * dpni_set_queue() - Set queue parameters * @mc_io: Pointer to MC portal's I/O object * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' @@ -1786,3 +1832,134 @@ int dpni_remove_fs_entry(struct fsl_mc_io *mc_io, /* send command to mc*/ return mc_send_command(mc_io, &cmd); } + +/** + * dpni_set_qos_table() - Set QoS mapping table + * @mc_io: Pointer to MC portal's I/O object + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' + * @token: Token of DPNI object + * @cfg: QoS table configuration + * + * This function and all QoS-related functions require that + *'max_tcs > 1' was set at DPNI creation. + * + * warning: Before calling this function, call dpkg_prepare_key_cfg() to + * prepare the key_cfg_iova parameter + * + * Return: '0' on Success; Error code otherwise. + */ +int dpni_set_qos_table(struct fsl_mc_io *mc_io, + u32 cmd_flags, + u16 token, + const struct dpni_qos_tbl_cfg *cfg) +{ + struct dpni_cmd_set_qos_table *cmd_params; + struct fsl_mc_command cmd = { 0 }; + + /* prepare command */ + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_QOS_TBL, + cmd_flags, + token); + cmd_params = (struct dpni_cmd_set_qos_table *)cmd.params; + cmd_params->default_tc = cfg->default_tc; + cmd_params->key_cfg_iova = cpu_to_le64(cfg->key_cfg_iova); + dpni_set_field(cmd_params->discard_on_miss, DISCARD_ON_MISS, + cfg->discard_on_miss); + + /* send command to mc*/ + return mc_send_command(mc_io, &cmd); +} + +/** + * dpni_add_qos_entry() - Add QoS mapping entry (to select a traffic class) + * @mc_io: Pointer to MC portal's I/O object + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' + * @token: Token of DPNI object + * @cfg: QoS rule to add + * @tc_id: Traffic class selection (0-7) + * @index: Location in the QoS table where to insert the entry. + * Only relevant if MASKING is enabled for QoS classification on + * this DPNI, it is ignored for exact match. + * + * Return: '0' on Success; Error code otherwise. 
+ */ +int dpni_add_qos_entry(struct fsl_mc_io *mc_io, + u32 cmd_flags, + u16 token, + const struct dpni_rule_cfg *cfg, + u8 tc_id, + u16 index) +{ + struct dpni_cmd_add_qos_entry *cmd_params; + struct fsl_mc_command cmd = { 0 }; + + /* prepare command */ + cmd.header = mc_encode_cmd_header(DPNI_CMDID_ADD_QOS_ENT, + cmd_flags, + token); + cmd_params = (struct dpni_cmd_add_qos_entry *)cmd.params; + cmd_params->tc_id = tc_id; + cmd_params->key_size = cfg->key_size; + cmd_params->index = cpu_to_le16(index); + cmd_params->key_iova = cpu_to_le64(cfg->key_iova); + cmd_params->mask_iova = cpu_to_le64(cfg->mask_iova); + + /* send command to mc*/ + return mc_send_command(mc_io, &cmd); +} + +/** + * dpni_remove_qos_entry() - Remove QoS mapping entry + * @mc_io: Pointer to MC portal's I/O object + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' + * @token: Token of DPNI object + * @cfg: QoS rule to remove + * + * Return: '0' on Success; Error code otherwise. + */ +int dpni_remove_qos_entry(struct fsl_mc_io *mc_io, + u32 cmd_flags, + u16 token, + const struct dpni_rule_cfg *cfg) +{ + struct dpni_cmd_remove_qos_entry *cmd_params; + struct fsl_mc_command cmd = { 0 }; + + /* prepare command */ + cmd.header = mc_encode_cmd_header(DPNI_CMDID_REMOVE_QOS_ENT, + cmd_flags, + token); + cmd_params = (struct dpni_cmd_remove_qos_entry *)cmd.params; + cmd_params->key_size = cfg->key_size; + cmd_params->key_iova = cpu_to_le64(cfg->key_iova); + cmd_params->mask_iova = cpu_to_le64(cfg->mask_iova); + + /* send command to mc*/ + return mc_send_command(mc_io, &cmd); +} + +/** + * dpni_clear_qos_table() - Clear all QoS mapping entries + * @mc_io: Pointer to MC portal's I/O object + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' + * @token: Token of DPNI object + * + * Following this function call, all frames are directed to + * the default traffic class (0) + * + * Return: '0' on Success; Error code otherwise. 
+ */ +int dpni_clear_qos_table(struct fsl_mc_io *mc_io, + u32 cmd_flags, + u16 token) +{ + struct fsl_mc_command cmd = { 0 }; + + /* prepare command */ + cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLR_QOS_TBL, + cmd_flags, + token); + + /* send command to mc*/ + return mc_send_command(mc_io, &cmd); +} diff --git a/drivers/net/ethernet/freescale/dpaa2/dpni.h b/drivers/net/ethernet/freescale/dpaa2/dpni.h index ee0711d06b3a..e874d8084142 100644 --- a/drivers/net/ethernet/freescale/dpaa2/dpni.h +++ b/drivers/net/ethernet/freescale/dpaa2/dpni.h @@ -514,6 +514,11 @@ int dpni_get_statistics(struct fsl_mc_io *mc_io, #define DPNI_LINK_OPT_ASYM_PAUSE 0x0000000000000008ULL /** + * Enable priority flow control pause frames + */ +#define DPNI_LINK_OPT_PFC_PAUSE 0x0000000000000010ULL + +/** * struct - Structure representing DPNI link configuration * @rate: Rate * @options: Mask of available options; use 'DPNI_LINK_OPT_<X>' values @@ -716,6 +721,26 @@ int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, const struct dpni_rx_dist_cfg *cfg); /** + * struct dpni_qos_tbl_cfg - Structure representing QOS table configuration + * @key_cfg_iova: I/O virtual address of 256 bytes DMA-able memory filled with + * key extractions to be used as the QoS criteria by calling + * dpkg_prepare_key_cfg() + * @discard_on_miss: Set to '1' to discard frames in case of no match (miss); + * '0' to use the 'default_tc' in such cases + * @default_tc: Used in case of no-match and 'discard_on_miss'= 0 + */ +struct dpni_qos_tbl_cfg { + u64 key_cfg_iova; + int discard_on_miss; + u8 default_tc; +}; + +int dpni_set_qos_table(struct fsl_mc_io *mc_io, + u32 cmd_flags, + u16 token, + const struct dpni_qos_tbl_cfg *cfg); + +/** * enum dpni_dest - DPNI destination types * @DPNI_DEST_NONE: Unassigned destination; The queue is set in parked mode and * does not generate FQDAN notifications; user is expected to @@ -858,6 +883,62 @@ enum dpni_congestion_point { }; /** + * struct dpni_dest_cfg - Structure representing DPNI destination parameters + * @dest_type: Destination type + * @dest_id: Either DPIO ID or DPCON ID, depending on the destination type + * @priority: Priority selection within the DPIO or DPCON channel; valid + * values are 0-1 or 0-7, depending on the number of priorities + * in that channel; not relevant for 'DPNI_DEST_NONE' option + */ +struct dpni_dest_cfg { + enum dpni_dest dest_type; + int dest_id; + u8 priority; +}; + +/* DPNI congestion options */ + +/** + * This congestion will trigger flow control or priority flow control. + * This will have effect only if flow control is enabled with + * dpni_set_link_cfg(). + */ +#define DPNI_CONG_OPT_FLOW_CONTROL 0x00000040 + +/** + * struct dpni_congestion_notification_cfg - congestion notification + * configuration + * @units: Units type + * @threshold_entry: Above this threshold we enter a congestion state. + * set it to '0' to disable it + * @threshold_exit: Below this threshold we exit the congestion state. 
+ * @message_ctx: The context that will be part of the CSCN message + * @message_iova: I/O virtual address (must be in DMA-able memory), + * must be 16B aligned; valid only if 'DPNI_CONG_OPT_WRITE_MEM_<X>' + * is contained in 'options' + * @dest_cfg: CSCN can be send to either DPIO or DPCON WQ channel + * @notification_mode: Mask of available options; use 'DPNI_CONG_OPT_<X>' values + */ + +struct dpni_congestion_notification_cfg { + enum dpni_congestion_unit units; + u32 threshold_entry; + u32 threshold_exit; + u64 message_ctx; + u64 message_iova; + struct dpni_dest_cfg dest_cfg; + u16 notification_mode; +}; + +int dpni_set_congestion_notification( + struct fsl_mc_io *mc_io, + u32 cmd_flags, + u16 token, + enum dpni_queue_type qtype, + u8 tc_id, + const struct dpni_congestion_notification_cfg *cfg); + +/** * struct dpni_taildrop - Structure representing the taildrop * @enable: Indicates whether the taildrop is active or not. * @units: Indicates the unit of THRESHOLD. Queue taildrop only supports @@ -961,6 +1042,22 @@ int dpni_remove_fs_entry(struct fsl_mc_io *mc_io, u8 tc_id, const struct dpni_rule_cfg *cfg); +int dpni_add_qos_entry(struct fsl_mc_io *mc_io, + u32 cmd_flags, + u16 token, + const struct dpni_rule_cfg *cfg, + u8 tc_id, + u16 index); + +int dpni_remove_qos_entry(struct fsl_mc_io *mc_io, + u32 cmd_flags, + u16 token, + const struct dpni_rule_cfg *cfg); + +int dpni_clear_qos_table(struct fsl_mc_io *mc_io, + u32 cmd_flags, + u16 token); + int dpni_get_api_version(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 *major_ver, diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c index 4acb91dce5fc..2d0d313ee7c5 100644 --- a/drivers/net/ethernet/freescale/fec_main.c +++ b/drivers/net/ethernet/freescale/fec_main.c @@ -1981,8 +1981,12 @@ static int fec_enet_clk_enable(struct net_device *ndev, bool enable) return 0; failed_clk_ref: - if (fep->clk_ref) - clk_disable_unprepare(fep->clk_ref); + if (fep->clk_ptp) { + mutex_lock(&fep->ptp_clk_mutex); + clk_disable_unprepare(fep->clk_ptp); + fep->ptp_clk_on = false; + mutex_unlock(&fep->ptp_clk_mutex); + } failed_clk_ptp: if (fep->clk_enet_out) clk_disable_unprepare(fep->clk_enet_out); diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c index 3de549c6c693..197dc5b2c090 100644 --- a/drivers/net/ethernet/ibm/ibmvnic.c +++ b/drivers/net/ethernet/ibm/ibmvnic.c @@ -4678,12 +4678,10 @@ static void ibmvnic_handle_crq(union ibmvnic_crq *crq, dev_err(dev, "Error %ld in VERSION_EXCHG_RSP\n", rc); break; } - dev_info(dev, "Partner protocol version is %d\n", - crq->version_exchange_rsp.version); - if (be16_to_cpu(crq->version_exchange_rsp.version) < - ibmvnic_version) - ibmvnic_version = + ibmvnic_version = be16_to_cpu(crq->version_exchange_rsp.version); + dev_info(dev, "Partner protocol version is %d\n", + ibmvnic_version); send_cap_queries(adapter); break; case QUERY_CAPABILITY_RSP: diff --git a/drivers/net/ethernet/mediatek/mtk_star_emac.c b/drivers/net/ethernet/mediatek/mtk_star_emac.c index 7df35872c107..f1ace4fec19f 100644 --- a/drivers/net/ethernet/mediatek/mtk_star_emac.c +++ b/drivers/net/ethernet/mediatek/mtk_star_emac.c @@ -413,8 +413,8 @@ static void mtk_star_dma_unmap_tx(struct mtk_star_priv *priv, static void mtk_star_nic_disable_pd(struct mtk_star_priv *priv) { - regmap_update_bits(priv->regs, MTK_STAR_REG_MAC_CFG, - MTK_STAR_BIT_MAC_CFG_NIC_PD, 0); + regmap_clear_bits(priv->regs, MTK_STAR_REG_MAC_CFG, + MTK_STAR_BIT_MAC_CFG_NIC_PD); } /* Unmask the three interrupts 
we care about, mask all others. */ @@ -434,41 +434,38 @@ static void mtk_star_intr_disable(struct mtk_star_priv *priv) static void mtk_star_intr_enable_tx(struct mtk_star_priv *priv) { - regmap_update_bits(priv->regs, MTK_STAR_REG_INT_MASK, - MTK_STAR_BIT_INT_STS_TNTC, 0); + regmap_clear_bits(priv->regs, MTK_STAR_REG_INT_MASK, + MTK_STAR_BIT_INT_STS_TNTC); } static void mtk_star_intr_enable_rx(struct mtk_star_priv *priv) { - regmap_update_bits(priv->regs, MTK_STAR_REG_INT_MASK, - MTK_STAR_BIT_INT_STS_FNRC, 0); + regmap_clear_bits(priv->regs, MTK_STAR_REG_INT_MASK, + MTK_STAR_BIT_INT_STS_FNRC); } static void mtk_star_intr_enable_stats(struct mtk_star_priv *priv) { - regmap_update_bits(priv->regs, MTK_STAR_REG_INT_MASK, - MTK_STAR_REG_INT_STS_MIB_CNT_TH, 0); + regmap_clear_bits(priv->regs, MTK_STAR_REG_INT_MASK, + MTK_STAR_REG_INT_STS_MIB_CNT_TH); } static void mtk_star_intr_disable_tx(struct mtk_star_priv *priv) { - regmap_update_bits(priv->regs, MTK_STAR_REG_INT_MASK, - MTK_STAR_BIT_INT_STS_TNTC, - MTK_STAR_BIT_INT_STS_TNTC); + regmap_set_bits(priv->regs, MTK_STAR_REG_INT_MASK, + MTK_STAR_BIT_INT_STS_TNTC); } static void mtk_star_intr_disable_rx(struct mtk_star_priv *priv) { - regmap_update_bits(priv->regs, MTK_STAR_REG_INT_MASK, - MTK_STAR_BIT_INT_STS_FNRC, - MTK_STAR_BIT_INT_STS_FNRC); + regmap_set_bits(priv->regs, MTK_STAR_REG_INT_MASK, + MTK_STAR_BIT_INT_STS_FNRC); } static void mtk_star_intr_disable_stats(struct mtk_star_priv *priv) { - regmap_update_bits(priv->regs, MTK_STAR_REG_INT_MASK, - MTK_STAR_REG_INT_STS_MIB_CNT_TH, - MTK_STAR_REG_INT_STS_MIB_CNT_TH); + regmap_set_bits(priv->regs, MTK_STAR_REG_INT_MASK, + MTK_STAR_REG_INT_STS_MIB_CNT_TH); } static unsigned int mtk_star_intr_read(struct mtk_star_priv *priv) @@ -524,12 +521,10 @@ static void mtk_star_dma_init(struct mtk_star_priv *priv) static void mtk_star_dma_start(struct mtk_star_priv *priv) { - regmap_update_bits(priv->regs, MTK_STAR_REG_TX_DMA_CTRL, - MTK_STAR_BIT_TX_DMA_CTRL_START, - MTK_STAR_BIT_TX_DMA_CTRL_START); - regmap_update_bits(priv->regs, MTK_STAR_REG_RX_DMA_CTRL, - MTK_STAR_BIT_RX_DMA_CTRL_START, - MTK_STAR_BIT_RX_DMA_CTRL_START); + regmap_set_bits(priv->regs, MTK_STAR_REG_TX_DMA_CTRL, + MTK_STAR_BIT_TX_DMA_CTRL_START); + regmap_set_bits(priv->regs, MTK_STAR_REG_RX_DMA_CTRL, + MTK_STAR_BIT_RX_DMA_CTRL_START); } static void mtk_star_dma_stop(struct mtk_star_priv *priv) @@ -553,16 +548,14 @@ static void mtk_star_dma_disable(struct mtk_star_priv *priv) static void mtk_star_dma_resume_rx(struct mtk_star_priv *priv) { - regmap_update_bits(priv->regs, MTK_STAR_REG_RX_DMA_CTRL, - MTK_STAR_BIT_RX_DMA_CTRL_RESUME, - MTK_STAR_BIT_RX_DMA_CTRL_RESUME); + regmap_set_bits(priv->regs, MTK_STAR_REG_RX_DMA_CTRL, + MTK_STAR_BIT_RX_DMA_CTRL_RESUME); } static void mtk_star_dma_resume_tx(struct mtk_star_priv *priv) { - regmap_update_bits(priv->regs, MTK_STAR_REG_TX_DMA_CTRL, - MTK_STAR_BIT_TX_DMA_CTRL_RESUME, - MTK_STAR_BIT_TX_DMA_CTRL_RESUME); + regmap_set_bits(priv->regs, MTK_STAR_REG_TX_DMA_CTRL, + MTK_STAR_BIT_TX_DMA_CTRL_RESUME); } static void mtk_star_set_mac_addr(struct net_device *ndev) @@ -842,8 +835,8 @@ static int mtk_star_hash_wait_ok(struct mtk_star_priv *priv) return ret; /* Check the BIST_OK bit. 
*/ - regmap_read(priv->regs, MTK_STAR_REG_HASH_CTRL, &val); - if (!(val & MTK_STAR_BIT_HASH_CTRL_BIST_OK)) + if (!regmap_test_bits(priv->regs, MTK_STAR_REG_HASH_CTRL, + MTK_STAR_BIT_HASH_CTRL_BIST_OK)) return -EIO; return 0; @@ -877,12 +870,10 @@ static int mtk_star_reset_hash_table(struct mtk_star_priv *priv) if (ret) return ret; - regmap_update_bits(priv->regs, MTK_STAR_REG_HASH_CTRL, - MTK_STAR_BIT_HASH_CTRL_BIST_EN, - MTK_STAR_BIT_HASH_CTRL_BIST_EN); - regmap_update_bits(priv->regs, MTK_STAR_REG_TEST1, - MTK_STAR_BIT_TEST1_RST_HASH_MBIST, - MTK_STAR_BIT_TEST1_RST_HASH_MBIST); + regmap_set_bits(priv->regs, MTK_STAR_REG_HASH_CTRL, + MTK_STAR_BIT_HASH_CTRL_BIST_EN); + regmap_set_bits(priv->regs, MTK_STAR_REG_TEST1, + MTK_STAR_BIT_TEST1_RST_HASH_MBIST); return mtk_star_hash_wait_ok(priv); } @@ -1013,13 +1004,13 @@ static int mtk_star_enable(struct net_device *ndev) return ret; /* Setup the hashing algorithm */ - regmap_update_bits(priv->regs, MTK_STAR_REG_ARL_CFG, - MTK_STAR_BIT_ARL_CFG_HASH_ALG | - MTK_STAR_BIT_ARL_CFG_MISC_MODE, 0); + regmap_clear_bits(priv->regs, MTK_STAR_REG_ARL_CFG, + MTK_STAR_BIT_ARL_CFG_HASH_ALG | + MTK_STAR_BIT_ARL_CFG_MISC_MODE); /* Don't strip VLAN tags */ - regmap_update_bits(priv->regs, MTK_STAR_REG_MAC_CFG, - MTK_STAR_BIT_MAC_CFG_VLAN_STRIP, 0); + regmap_clear_bits(priv->regs, MTK_STAR_REG_MAC_CFG, + MTK_STAR_BIT_MAC_CFG_VLAN_STRIP); /* Setup DMA */ mtk_star_dma_init(priv); @@ -1201,9 +1192,8 @@ static void mtk_star_set_rx_mode(struct net_device *ndev) int ret; if (ndev->flags & IFF_PROMISC) { - regmap_update_bits(priv->regs, MTK_STAR_REG_ARL_CFG, - MTK_STAR_BIT_ARL_CFG_MISC_MODE, - MTK_STAR_BIT_ARL_CFG_MISC_MODE); + regmap_set_bits(priv->regs, MTK_STAR_REG_ARL_CFG, + MTK_STAR_BIT_ARL_CFG_MISC_MODE); } else if (netdev_mc_count(ndev) > MTK_STAR_HASHTABLE_MC_LIMIT || ndev->flags & IFF_ALLMULTI) { for (i = 0; i < MTK_STAR_HASHTABLE_SIZE_MAX; i++) { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h index c5f8a3925b16..842db20493df 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h @@ -995,10 +995,12 @@ void mlx5e_deactivate_priv_channels(struct mlx5e_priv *priv); void mlx5e_build_default_indir_rqt(u32 *indirection_rqt, int len, int num_channels); -void mlx5e_set_tx_cq_mode_params(struct mlx5e_params *params, - u8 cq_period_mode); -void mlx5e_set_rx_cq_mode_params(struct mlx5e_params *params, - u8 cq_period_mode); + +void mlx5e_reset_tx_moderation(struct mlx5e_params *params, u8 cq_period_mode); +void mlx5e_reset_rx_moderation(struct mlx5e_params *params, u8 cq_period_mode); +void mlx5e_set_tx_cq_mode_params(struct mlx5e_params *params, u8 cq_period_mode); +void mlx5e_set_rx_cq_mode_params(struct mlx5e_params *params, u8 cq_period_mode); + void mlx5e_set_rq_type(struct mlx5_core_dev *mdev, struct mlx5e_params *params); void mlx5e_init_rq_type_params(struct mlx5_core_dev *mdev, struct mlx5e_params *params); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port.c b/drivers/net/ethernet/mellanox/mlx5/core/en/port.c index 2c4a670c8ffd..2a8950b3056f 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/port.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port.c @@ -369,17 +369,19 @@ enum mlx5e_fec_supported_link_mode { *_policy = MLX5_GET(pplm_reg, _buf, fec_override_admin_##link); \ } while (0) -#define MLX5E_FEC_OVERRIDE_ADMIN_50G_POLICY(buf, policy, write, link) \ - do { \ - u16 *__policy = &(policy); \ - bool _write = (write); \ - \ - if 
(_write && *__policy) \ - *__policy = find_first_bit((u_long *)__policy, \ - sizeof(u16) * BITS_PER_BYTE);\ - MLX5E_FEC_OVERRIDE_ADMIN_POLICY(buf, *__policy, _write, link); \ - if (!_write && *__policy) \ - *__policy = 1 << *__policy; \ +#define MLX5E_FEC_OVERRIDE_ADMIN_50G_POLICY(buf, policy, write, link) \ + do { \ + unsigned long policy_long; \ + u16 *__policy = &(policy); \ + bool _write = (write); \ + \ + policy_long = *__policy; \ + if (_write && *__policy) \ + *__policy = find_first_bit(&policy_long, \ + sizeof(policy_long) * BITS_PER_BYTE);\ + MLX5E_FEC_OVERRIDE_ADMIN_POLICY(buf, *__policy, _write, link); \ + if (!_write && *__policy) \ + *__policy = 1 << *__policy; \ } while (0) /* get/set FEC admin field for a given speed */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c index c609a5e50ebc..80713123de5c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c @@ -306,20 +306,6 @@ mlx5e_rep_indr_block_priv_lookup(struct mlx5e_rep_priv *rpriv, return NULL; } -static void mlx5e_rep_indr_unregister_block(struct mlx5e_rep_priv *rpriv, - struct net_device *netdev); - -void mlx5e_rep_indr_clean_block_privs(struct mlx5e_rep_priv *rpriv) -{ - struct mlx5e_rep_indr_block_priv *cb_priv, *temp; - struct list_head *head = &rpriv->uplink_priv.tc_indr_block_priv_list; - - list_for_each_entry_safe(cb_priv, temp, head, list) { - mlx5e_rep_indr_unregister_block(rpriv, cb_priv->netdev); - kfree(cb_priv); - } -} - static int mlx5e_rep_indr_offload(struct net_device *netdev, struct flow_cls_offload *flower, @@ -423,9 +409,14 @@ mlx5e_rep_indr_setup_block(struct net_device *netdev, struct flow_block_offload *f, flow_setup_cb_t *setup_cb) { + struct mlx5e_priv *priv = netdev_priv(rpriv->netdev); struct mlx5e_rep_indr_block_priv *indr_priv; struct flow_block_cb *block_cb; + if (!mlx5e_tc_tun_device_to_offload(priv, netdev) && + !(is_vlan_dev(netdev) && vlan_dev_real_dev(netdev) == rpriv->netdev)) + return -EOPNOTSUPP; + if (f->binder_type != FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS) return -EOPNOTSUPP; @@ -492,76 +483,20 @@ int mlx5e_rep_indr_setup_cb(struct net_device *netdev, void *cb_priv, } } -static int mlx5e_rep_indr_register_block(struct mlx5e_rep_priv *rpriv, - struct net_device *netdev) -{ - int err; - - err = __flow_indr_block_cb_register(netdev, rpriv, - mlx5e_rep_indr_setup_cb, - rpriv); - if (err) { - struct mlx5e_priv *priv = netdev_priv(rpriv->netdev); - - mlx5_core_err(priv->mdev, "Failed to register remote block notifier for %s err=%d\n", - netdev_name(netdev), err); - } - return err; -} - -static void mlx5e_rep_indr_unregister_block(struct mlx5e_rep_priv *rpriv, - struct net_device *netdev) -{ - __flow_indr_block_cb_unregister(netdev, mlx5e_rep_indr_setup_cb, - rpriv); -} - -static int mlx5e_nic_rep_netdevice_event(struct notifier_block *nb, - unsigned long event, void *ptr) -{ - struct mlx5e_rep_priv *rpriv = container_of(nb, struct mlx5e_rep_priv, - uplink_priv.netdevice_nb); - struct mlx5e_priv *priv = netdev_priv(rpriv->netdev); - struct net_device *netdev = netdev_notifier_info_to_dev(ptr); - - if (!mlx5e_tc_tun_device_to_offload(priv, netdev) && - !(is_vlan_dev(netdev) && vlan_dev_real_dev(netdev) == rpriv->netdev)) - return NOTIFY_OK; - - switch (event) { - case NETDEV_REGISTER: - mlx5e_rep_indr_register_block(rpriv, netdev); - break; - case NETDEV_UNREGISTER: - mlx5e_rep_indr_unregister_block(rpriv, netdev); - break; - } - return NOTIFY_OK; -} - 
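For context on the registration model the following mlx5 rep hunks switch to: instead of walking NETDEV_REGISTER/UNREGISTER events per netdevice, the driver now registers a single driver-wide callback with the flow indirect-block core and removes it with a matching unregister call, while the tunnel/VLAN device filtering moves into the setup callback itself. A brief sketch of that pairing under those assumptions; the wrapper names uplink_rep_indr_init/cleanup are hypothetical, the flow_indr_dev_* calls and mlx5e_rep_indr_* callbacks are the ones used by this patch, and the snippet assumes the surrounding mlx5 rep definitions.

static int uplink_rep_indr_init(struct mlx5e_rep_priv *rpriv)
{
	/* One registration per uplink representor; the core invokes
	 * mlx5e_rep_indr_setup_cb() when TC blocks are bound on other
	 * devices, and the callback filters for tunnel/VLAN devices.
	 */
	return flow_indr_dev_register(mlx5e_rep_indr_setup_cb, rpriv);
}

static void uplink_rep_indr_cleanup(struct mlx5e_rep_priv *rpriv)
{
	/* The third argument lets the core tear down blocks that were
	 * bound through mlx5e_rep_indr_setup_tc_cb for this driver.
	 */
	flow_indr_dev_unregister(mlx5e_rep_indr_setup_cb, rpriv,
				 mlx5e_rep_indr_setup_tc_cb);
}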
int mlx5e_rep_tc_netdevice_event_register(struct mlx5e_rep_priv *rpriv) { struct mlx5_rep_uplink_priv *uplink_priv = &rpriv->uplink_priv; - int err; /* init indirect block notifications */ INIT_LIST_HEAD(&uplink_priv->tc_indr_block_priv_list); - uplink_priv->netdevice_nb.notifier_call = mlx5e_nic_rep_netdevice_event; - err = register_netdevice_notifier_dev_net(rpriv->netdev, - &uplink_priv->netdevice_nb, - &uplink_priv->netdevice_nn); - return err; + return flow_indr_dev_register(mlx5e_rep_indr_setup_cb, rpriv); } void mlx5e_rep_tc_netdevice_event_unregister(struct mlx5e_rep_priv *rpriv) { - struct mlx5_rep_uplink_priv *uplink_priv = &rpriv->uplink_priv; - - /* clean indirect TC block notifications */ - unregister_netdevice_notifier_dev_net(rpriv->netdev, - &uplink_priv->netdevice_nb, - &uplink_priv->netdevice_nn); + flow_indr_dev_unregister(mlx5e_rep_indr_setup_cb, rpriv, + mlx5e_rep_indr_setup_tc_cb); } #if IS_ENABLED(CONFIG_NET_TC_SKB_EXT) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.h b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.h index 86f92abf2fdd..fdf9702c2d7d 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.h @@ -33,7 +33,6 @@ void mlx5e_rep_encap_entry_detach(struct mlx5e_priv *priv, int mlx5e_rep_setup_tc(struct net_device *dev, enum tc_setup_type type, void *type_data); -void mlx5e_rep_indr_clean_block_privs(struct mlx5e_rep_priv *rpriv); bool mlx5e_rep_tc_update_skb(struct mlx5_cqe64 *cqe, struct sk_buff *skb, @@ -65,9 +64,6 @@ static inline int mlx5e_rep_setup_tc(struct net_device *dev, enum tc_setup_type type, void *type_data) { return -EOPNOTSUPP; } -static inline void -mlx5e_rep_indr_clean_block_privs(struct mlx5e_rep_priv *rpriv) {} - struct mlx5e_tc_update_priv; static inline bool mlx5e_rep_tc_update_skb(struct mlx5_cqe64 *cqe, diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c index 0279bb7246e1..3ef2525e8de9 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c @@ -527,8 +527,8 @@ int mlx5e_ethtool_set_coalesce(struct mlx5e_priv *priv, struct dim_cq_moder *rx_moder, *tx_moder; struct mlx5_core_dev *mdev = priv->mdev; struct mlx5e_channels new_channels = {}; + bool reset_rx, reset_tx; int err = 0; - bool reset; if (!MLX5_CAP_GEN(mdev, cq_moderation)) return -EOPNOTSUPP; @@ -566,15 +566,28 @@ int mlx5e_ethtool_set_coalesce(struct mlx5e_priv *priv, } /* we are opened */ - reset = (!!coal->use_adaptive_rx_coalesce != priv->channels.params.rx_dim_enabled) || - (!!coal->use_adaptive_tx_coalesce != priv->channels.params.tx_dim_enabled); + reset_rx = !!coal->use_adaptive_rx_coalesce != priv->channels.params.rx_dim_enabled; + reset_tx = !!coal->use_adaptive_tx_coalesce != priv->channels.params.tx_dim_enabled; - if (!reset) { + if (!reset_rx && !reset_tx) { mlx5e_set_priv_channels_coalesce(priv, coal); priv->channels.params = new_channels.params; goto out; } + if (reset_rx) { + u8 mode = MLX5E_GET_PFLAG(&new_channels.params, + MLX5E_PFLAG_RX_CQE_BASED_MODER); + + mlx5e_reset_rx_moderation(&new_channels.params, mode); + } + if (reset_tx) { + u8 mode = MLX5E_GET_PFLAG(&new_channels.params, + MLX5E_PFLAG_TX_CQE_BASED_MODER); + + mlx5e_reset_tx_moderation(&new_channels.params, mode); + } + err = mlx5e_safe_switch_channels(priv, &new_channels, NULL, NULL); out: @@ -665,11 +678,12 @@ static const u32 pplm_fec_2_ethtool_linkmodes[] = { static int 
get_fec_supported_advertised(struct mlx5_core_dev *dev, struct ethtool_link_ksettings *link_ksettings) { - u_long active_fec = 0; + unsigned long active_fec_long; + u32 active_fec; u32 bitn; int err; - err = mlx5e_get_fec_mode(dev, (u32 *)&active_fec, NULL); + err = mlx5e_get_fec_mode(dev, &active_fec, NULL); if (err) return (err == -EOPNOTSUPP) ? 0 : err; @@ -682,10 +696,11 @@ static int get_fec_supported_advertised(struct mlx5_core_dev *dev, MLX5E_ADVERTISE_SUPPORTED_FEC(MLX5E_FEC_LLRS_272_257_1, ETHTOOL_LINK_MODE_FEC_LLRS_BIT); + active_fec_long = active_fec; /* active fec is a bit set, find out which bit is set and * advertise the corresponding ethtool bit */ - bitn = find_first_bit(&active_fec, sizeof(u32) * BITS_PER_BYTE); + bitn = find_first_bit(&active_fec_long, sizeof(active_fec_long) * BITS_PER_BYTE); if (bitn < ARRAY_SIZE(pplm_fec_2_ethtool_linkmodes)) __set_bit(pplm_fec_2_ethtool_linkmodes[bitn], link_ksettings->link_modes.advertising); @@ -1517,8 +1532,8 @@ static int mlx5e_get_fecparam(struct net_device *netdev, { struct mlx5e_priv *priv = netdev_priv(netdev); struct mlx5_core_dev *mdev = priv->mdev; - u16 fec_configured = 0; - u32 fec_active = 0; + u16 fec_configured; + u32 fec_active; int err; err = mlx5e_get_fec_mode(mdev, &fec_active, &fec_configured); @@ -1526,14 +1541,14 @@ static int mlx5e_get_fecparam(struct net_device *netdev, if (err) return err; - fecparam->active_fec = pplm2ethtool_fec((u_long)fec_active, - sizeof(u32) * BITS_PER_BYTE); + fecparam->active_fec = pplm2ethtool_fec((unsigned long)fec_active, + sizeof(unsigned long) * BITS_PER_BYTE); if (!fecparam->active_fec) return -EOPNOTSUPP; - fecparam->fec = pplm2ethtool_fec((u_long)fec_configured, - sizeof(u16) * BITS_PER_BYTE); + fecparam->fec = pplm2ethtool_fec((unsigned long)fec_configured, + sizeof(unsigned long) * BITS_PER_BYTE); return 0; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index 2c36a04181b8..a836a02a2116 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -4707,7 +4707,7 @@ static u8 mlx5_to_net_dim_cq_period_mode(u8 cq_period_mode) DIM_CQ_PERIOD_MODE_START_FROM_EQE; } -void mlx5e_set_tx_cq_mode_params(struct mlx5e_params *params, u8 cq_period_mode) +void mlx5e_reset_tx_moderation(struct mlx5e_params *params, u8 cq_period_mode) { if (params->tx_dim_enabled) { u8 dim_period_mode = mlx5_to_net_dim_cq_period_mode(cq_period_mode); @@ -4716,13 +4716,9 @@ void mlx5e_set_tx_cq_mode_params(struct mlx5e_params *params, u8 cq_period_mode) } else { params->tx_cq_moderation = mlx5e_get_def_tx_moderation(cq_period_mode); } - - MLX5E_SET_PFLAG(params, MLX5E_PFLAG_TX_CQE_BASED_MODER, - params->tx_cq_moderation.cq_period_mode == - MLX5_CQ_PERIOD_MODE_START_FROM_CQE); } -void mlx5e_set_rx_cq_mode_params(struct mlx5e_params *params, u8 cq_period_mode) +void mlx5e_reset_rx_moderation(struct mlx5e_params *params, u8 cq_period_mode) { if (params->rx_dim_enabled) { u8 dim_period_mode = mlx5_to_net_dim_cq_period_mode(cq_period_mode); @@ -4731,7 +4727,19 @@ void mlx5e_set_rx_cq_mode_params(struct mlx5e_params *params, u8 cq_period_mode) } else { params->rx_cq_moderation = mlx5e_get_def_rx_moderation(cq_period_mode); } +} + +void mlx5e_set_tx_cq_mode_params(struct mlx5e_params *params, u8 cq_period_mode) +{ + mlx5e_reset_tx_moderation(params, cq_period_mode); + MLX5E_SET_PFLAG(params, MLX5E_PFLAG_TX_CQE_BASED_MODER, + params->tx_cq_moderation.cq_period_mode == + 
MLX5_CQ_PERIOD_MODE_START_FROM_CQE); +} +void mlx5e_set_rx_cq_mode_params(struct mlx5e_params *params, u8 cq_period_mode) +{ + mlx5e_reset_rx_moderation(params, cq_period_mode); MLX5E_SET_PFLAG(params, MLX5E_PFLAG_RX_CQE_BASED_MODER, params->rx_cq_moderation.cq_period_mode == MLX5_CQ_PERIOD_MODE_START_FROM_CQE); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c index af89a4803c7d..006807e04eda 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c @@ -1018,7 +1018,6 @@ destroy_tises: static void mlx5e_cleanup_uplink_rep_tx(struct mlx5e_rep_priv *rpriv) { mlx5e_rep_tc_netdevice_event_unregister(rpriv); - mlx5e_rep_indr_clean_block_privs(rpriv); mlx5e_rep_bond_cleanup(rpriv); mlx5e_rep_tc_cleanup(rpriv); } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h index da9f1686d525..1d5669801484 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h @@ -69,13 +69,8 @@ struct mlx5_rep_uplink_priv { * tc_indr_block_cb_priv_list is used to lookup indirect callback * private data * - * netdevice_nb is the netdev events notifier - used to register - * tunnel devices for block events - * */ struct list_head tc_indr_block_priv_list; - struct notifier_block netdevice_nb; - struct netdev_net_notifier netdevice_nn; struct mlx5_tun_entropy tun_entropy; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c index 3ce177c24d52..7fc84f58e28a 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c @@ -2153,7 +2153,7 @@ static int mlx5e_flower_parse_meta(struct net_device *filter_dev, flow_rule_match_meta(rule, &match); if (match.mask->ingress_ifindex != 0xFFFFFFFF) { NL_SET_ERR_MSG_MOD(extack, "Unsupported ingress ifindex mask"); - return -EINVAL; + return -EOPNOTSUPP; } ingress_dev = __dev_get_by_index(dev_net(filter_dev), @@ -2161,13 +2161,13 @@ static int mlx5e_flower_parse_meta(struct net_device *filter_dev, if (!ingress_dev) { NL_SET_ERR_MSG_MOD(extack, "Can't find the ingress port to match on"); - return -EINVAL; + return -ENOENT; } if (ingress_dev != filter_dev) { NL_SET_ERR_MSG_MOD(extack, "Can't match on the ingress filter port"); - return -EINVAL; + return -EOPNOTSUPP; } return 0; @@ -4162,10 +4162,6 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv, if (!mlx5e_is_valid_eswitch_fwd_dev(priv, out_dev)) { NL_SET_ERR_MSG_MOD(extack, "devices are not on same switch HW, can't offload forwarding"); - netdev_warn(priv->netdev, - "devices %s %s not on same switch HW, can't offload forwarding\n", - priv->netdev->name, - out_dev->name); return -EOPNOTSUPP; } @@ -4950,7 +4946,7 @@ void mlx5e_tc_stats_matchall(struct mlx5e_priv *priv, dpkts = cur_stats.rx_packets - rpriv->prev_vf_vport_stats.rx_packets; dbytes = cur_stats.rx_bytes - rpriv->prev_vf_vport_stats.rx_bytes; rpriv->prev_vf_vport_stats = cur_stats; - flow_stats_update(&ma->stats, dpkts, dbytes, jiffies, + flow_stats_update(&ma->stats, dbytes, dpkts, jiffies, FLOW_ACTION_HW_STATS_DELAYED); } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c index 09fe2d777550..df46b1fce3a7 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c @@ -1544,6 +1544,22 @@ static void 
shutdown(struct pci_dev *pdev) mlx5_pci_disable_device(dev); } +static int mlx5_suspend(struct pci_dev *pdev, pm_message_t state) +{ + struct mlx5_core_dev *dev = pci_get_drvdata(pdev); + + mlx5_unload_one(dev, false); + + return 0; +} + +static int mlx5_resume(struct pci_dev *pdev) +{ + struct mlx5_core_dev *dev = pci_get_drvdata(pdev); + + return mlx5_load_one(dev, false); +} + static const struct pci_device_id mlx5_core_pci_table[] = { { PCI_VDEVICE(MELLANOX, PCI_DEVICE_ID_MELLANOX_CONNECTIB) }, { PCI_VDEVICE(MELLANOX, 0x1012), MLX5_PCI_DEV_IS_VF}, /* Connect-IB VF */ @@ -1587,6 +1603,8 @@ static struct pci_driver mlx5_core_driver = { .id_table = mlx5_core_pci_table, .probe = init_one, .remove = remove_one, + .suspend = mlx5_suspend, + .resume = mlx5_resume, .shutdown = shutdown, .err_handler = &mlx5_err_handler, .sriov_configure = mlx5_core_sriov_configure, diff --git a/drivers/net/ethernet/mellanox/mlxsw/reg.h b/drivers/net/ethernet/mellanox/mlxsw/reg.h index 38fa7304af0c..fcb88d4271bf 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/reg.h +++ b/drivers/net/ethernet/mellanox/mlxsw/reg.h @@ -5536,7 +5536,6 @@ enum mlxsw_reg_htgt_trap_group { MLXSW_REG_HTGT_TRAP_GROUP_SP_MULTICAST, MLXSW_REG_HTGT_TRAP_GROUP_SP_NEIGH_DISCOVERY, MLXSW_REG_HTGT_TRAP_GROUP_SP_ROUTER_EXP, - MLXSW_REG_HTGT_TRAP_GROUP_SP_REMOTE_ROUTE, MLXSW_REG_HTGT_TRAP_GROUP_SP_IP2ME, MLXSW_REG_HTGT_TRAP_GROUP_SP_DHCP, MLXSW_REG_HTGT_TRAP_GROUP_SP_EVENT, @@ -5552,6 +5551,7 @@ enum mlxsw_reg_htgt_trap_group { MLXSW_REG_HTGT_TRAP_GROUP_SP_DUMMY, MLXSW_REG_HTGT_TRAP_GROUP_SP_L2_DISCARDS, MLXSW_REG_HTGT_TRAP_GROUP_SP_L3_DISCARDS, + MLXSW_REG_HTGT_TRAP_GROUP_SP_L3_EXCEPTIONS, MLXSW_REG_HTGT_TRAP_GROUP_SP_TUNNEL_DISCARDS, MLXSW_REG_HTGT_TRAP_GROUP_SP_ACL_DISCARDS, diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c index c598ae9ed106..5ffa32b75e5f 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c @@ -3987,10 +3987,15 @@ static void mlxsw_sp_rx_listener_l3_mark_func(struct sk_buff *skb, return mlxsw_sp_rx_listener_no_mark_func(skb, local_port, priv); } -static void mlxsw_sp_rx_listener_sample_func(struct sk_buff *skb, u8 local_port, - void *priv) +void mlxsw_sp_ptp_receive(struct mlxsw_sp *mlxsw_sp, struct sk_buff *skb, + u8 local_port) +{ + mlxsw_sp->ptp_ops->receive(mlxsw_sp, skb, local_port); +} + +void mlxsw_sp_sample_receive(struct mlxsw_sp *mlxsw_sp, struct sk_buff *skb, + u8 local_port) { - struct mlxsw_sp *mlxsw_sp = priv; struct mlxsw_sp_port *mlxsw_sp_port = mlxsw_sp->ports[local_port]; struct mlxsw_sp_port_sample *sample; u32 size; @@ -4014,14 +4019,6 @@ out: consume_skb(skb); } -static void mlxsw_sp_rx_listener_ptp(struct sk_buff *skb, u8 local_port, - void *priv) -{ - struct mlxsw_sp *mlxsw_sp = priv; - - mlxsw_sp->ptp_ops->receive(mlxsw_sp, skb, local_port); -} - #define MLXSW_SP_RXL_NO_MARK(_trap_id, _action, _trap_group, _is_ctrl) \ MLXSW_RXL(mlxsw_sp_rx_listener_no_mark_func, _trap_id, _action, \ _is_ctrl, SP_##_trap_group, DISCARD) @@ -4041,60 +4038,13 @@ static const struct mlxsw_listener mlxsw_sp_listener[] = { /* Events */ MLXSW_SP_EVENTL(mlxsw_sp_pude_event_func, PUDE), /* L2 traps */ - MLXSW_SP_RXL_NO_MARK(STP, TRAP_TO_CPU, STP, true), - MLXSW_SP_RXL_NO_MARK(LACP, TRAP_TO_CPU, LACP, true), - MLXSW_RXL(mlxsw_sp_rx_listener_ptp, LLDP, TRAP_TO_CPU, - false, SP_LLDP, DISCARD), - MLXSW_SP_RXL_MARK(IGMP_QUERY, MIRROR_TO_CPU, MC_SNOOPING, false), - MLXSW_SP_RXL_NO_MARK(IGMP_V1_REPORT, TRAP_TO_CPU, 
			     MC_SNOOPING, false),
-	MLXSW_SP_RXL_NO_MARK(IGMP_V2_REPORT, TRAP_TO_CPU, MC_SNOOPING, false),
-	MLXSW_SP_RXL_NO_MARK(IGMP_V2_LEAVE, TRAP_TO_CPU, MC_SNOOPING, false),
-	MLXSW_SP_RXL_NO_MARK(IGMP_V3_REPORT, TRAP_TO_CPU, MC_SNOOPING, false),
-	MLXSW_SP_RXL_MARK(ARPBC, MIRROR_TO_CPU, NEIGH_DISCOVERY, false),
-	MLXSW_SP_RXL_MARK(ARPUC, MIRROR_TO_CPU, NEIGH_DISCOVERY, false),
 	MLXSW_SP_RXL_NO_MARK(FID_MISS, TRAP_TO_CPU, FID_MISS, false),
-	MLXSW_SP_RXL_MARK(IPV6_MLDV12_LISTENER_QUERY, MIRROR_TO_CPU,
-			  MC_SNOOPING, false),
-	MLXSW_SP_RXL_NO_MARK(IPV6_MLDV1_LISTENER_REPORT, TRAP_TO_CPU,
-			     MC_SNOOPING, false),
-	MLXSW_SP_RXL_NO_MARK(IPV6_MLDV1_LISTENER_DONE, TRAP_TO_CPU, MC_SNOOPING,
-			     false),
-	MLXSW_SP_RXL_NO_MARK(IPV6_MLDV2_LISTENER_REPORT, TRAP_TO_CPU,
-			     MC_SNOOPING, false),
 	/* L3 traps */
-	MLXSW_SP_RXL_L3_MARK(LBERROR, MIRROR_TO_CPU, LBERROR, false),
-	MLXSW_SP_RXL_MARK(IP2ME, TRAP_TO_CPU, IP2ME, false),
 	MLXSW_SP_RXL_MARK(IPV6_UNSPECIFIED_ADDRESS, TRAP_TO_CPU, ROUTER_EXP, false),
-	MLXSW_SP_RXL_MARK(IPV6_LINK_LOCAL_DEST, TRAP_TO_CPU, IP2ME, false),
 	MLXSW_SP_RXL_MARK(IPV6_LINK_LOCAL_SRC, TRAP_TO_CPU, ROUTER_EXP, false),
-	MLXSW_SP_RXL_MARK(IPV6_ALL_NODES_LINK, TRAP_TO_CPU, IPV6, false),
-	MLXSW_SP_RXL_MARK(IPV6_ALL_ROUTERS_LINK, TRAP_TO_CPU, IPV6,
-			  false),
-	MLXSW_SP_RXL_MARK(IPV4_OSPF, TRAP_TO_CPU, OSPF, false),
-	MLXSW_SP_RXL_MARK(IPV6_OSPF, TRAP_TO_CPU, OSPF, false),
-	MLXSW_SP_RXL_MARK(IPV4_DHCP, TRAP_TO_CPU, DHCP, false),
-	MLXSW_SP_RXL_MARK(IPV6_DHCP, TRAP_TO_CPU, DHCP, false),
-	MLXSW_SP_RXL_MARK(RTR_INGRESS0, TRAP_TO_CPU, REMOTE_ROUTE, false),
-	MLXSW_SP_RXL_MARK(IPV4_BGP, TRAP_TO_CPU, BGP, false),
-	MLXSW_SP_RXL_MARK(IPV6_BGP, TRAP_TO_CPU, BGP, false),
-	MLXSW_SP_RXL_MARK(L3_IPV6_ROUTER_SOLICITATION, TRAP_TO_CPU, IPV6,
-			  false),
-	MLXSW_SP_RXL_MARK(L3_IPV6_ROUTER_ADVERTISEMENT, TRAP_TO_CPU, IPV6,
-			  false),
-	MLXSW_SP_RXL_MARK(L3_IPV6_NEIGHBOR_SOLICITATION, TRAP_TO_CPU,
-			  NEIGH_DISCOVERY, false),
-	MLXSW_SP_RXL_MARK(L3_IPV6_NEIGHBOR_ADVERTISEMENT, TRAP_TO_CPU,
-			  NEIGH_DISCOVERY, false),
-	MLXSW_SP_RXL_MARK(L3_IPV6_REDIRECTION, TRAP_TO_CPU, IPV6, false),
 	MLXSW_SP_RXL_MARK(IPV6_MC_LINK_LOCAL_DEST, TRAP_TO_CPU, ROUTER_EXP, false),
-	MLXSW_SP_RXL_MARK(ROUTER_ALERT_IPV4, TRAP_TO_CPU, IP2ME, false),
-	MLXSW_SP_RXL_MARK(ROUTER_ALERT_IPV6, TRAP_TO_CPU, IP2ME, false),
-	MLXSW_SP_RXL_MARK(IPV4_VRRP, TRAP_TO_CPU, VRRP, false),
-	MLXSW_SP_RXL_MARK(IPV6_VRRP, TRAP_TO_CPU, VRRP, false),
-	MLXSW_SP_RXL_MARK(IPV4_BFD, TRAP_TO_CPU, BFD, false),
-	MLXSW_SP_RXL_MARK(IPV6_BFD, TRAP_TO_CPU, BFD, false),
 	MLXSW_SP_RXL_NO_MARK(DISCARD_ING_ROUTER_SIP_CLASS_E, FORWARD, ROUTER_EXP, false),
 	MLXSW_SP_RXL_NO_MARK(DISCARD_ING_ROUTER_MC_DMAC, FORWARD,
@@ -4103,24 +4053,11 @@ static const struct mlxsw_listener mlxsw_sp_listener[] = {
 			     ROUTER_EXP, false),
 	MLXSW_SP_RXL_NO_MARK(DISCARD_ING_ROUTER_DIP_LINK_LOCAL, FORWARD, ROUTER_EXP, false),
-	/* PKT Sample trap */
-	MLXSW_RXL(mlxsw_sp_rx_listener_sample_func, PKT_SAMPLE, MIRROR_TO_CPU,
-		  false, SP_PKT_SAMPLE, DISCARD),
-	/* ACL trap */
-	MLXSW_SP_RXL_NO_MARK(ACL0, TRAP_TO_CPU, FLOW_LOGGING, false),
 	/* Multicast Router Traps */
-	MLXSW_SP_RXL_MARK(IPV4_PIM, TRAP_TO_CPU, PIM, false),
-	MLXSW_SP_RXL_MARK(IPV6_PIM, TRAP_TO_CPU, PIM, false),
 	MLXSW_SP_RXL_MARK(ACL1, TRAP_TO_CPU, MULTICAST, false),
 	MLXSW_SP_RXL_L3_MARK(ACL2, TRAP_TO_CPU, MULTICAST, false),
 	/* NVE traps */
 	MLXSW_SP_RXL_MARK(NVE_ENCAP_ARP, TRAP_TO_CPU, NEIGH_DISCOVERY, false),
-	MLXSW_SP_RXL_NO_MARK(NVE_DECAP_ARP, TRAP_TO_CPU, NEIGH_DISCOVERY,
-			     false),
-	/* PTP traps */
-	MLXSW_RXL(mlxsw_sp_rx_listener_ptp, PTP0, TRAP_TO_CPU,
-		  false, SP_PTP0, DISCARD),
-	MLXSW_SP_RXL_NO_MARK(PTP1, TRAP_TO_CPU, PTP1, false),
 };
 
 static const struct mlxsw_listener mlxsw_sp1_listener[] = {
@@ -4149,48 +4086,12 @@ static int mlxsw_sp_cpu_policers_set(struct mlxsw_core *mlxsw_core)
 	for (i = 0; i < max_cpu_policers; i++) {
 		is_bytes = false;
 		switch (i) {
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_STP:
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_LACP:
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_LLDP:
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_OSPF:
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_PIM:
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_LBERROR:
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_DHCP:
-			rate = 128;
-			burst_size = 7;
-			break;
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_MC_SNOOPING:
-			rate = 16 * 1024;
-			burst_size = 10;
-			break;
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_BGP:
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_NEIGH_DISCOVERY:
 		case MLXSW_REG_HTGT_TRAP_GROUP_SP_ROUTER_EXP:
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_REMOTE_ROUTE:
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_IPV6:
 		case MLXSW_REG_HTGT_TRAP_GROUP_SP_MULTICAST:
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_FLOW_LOGGING:
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_IP2ME:
 		case MLXSW_REG_HTGT_TRAP_GROUP_SP_FID_MISS:
 			rate = 1024;
 			burst_size = 7;
 			break;
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_PTP0:
-			rate = 24 * 1024;
-			burst_size = 12;
-			break;
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_PTP1:
-			rate = 19 * 1024;
-			burst_size = 12;
-			break;
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_VRRP:
-			rate = 360;
-			burst_size = 7;
-			break;
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_BFD:
-			rate = 20 * 1024;
-			burst_size = 10;
-			break;
 		default:
 			continue;
 		}
@@ -4225,46 +4126,12 @@ static int mlxsw_sp_trap_groups_set(struct mlxsw_core *mlxsw_core)
 	for (i = 0; i < max_trap_groups; i++) {
 		policer_id = i;
 		switch (i) {
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_STP:
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_LACP:
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_LLDP:
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_OSPF:
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_PIM:
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_PTP0:
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_VRRP:
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_BFD:
-			priority = 5;
-			tc = 5;
-			break;
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_FLOW_LOGGING:
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_BGP:
-			priority = 4;
-			tc = 4;
-			break;
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_MC_SNOOPING:
-			priority = 3;
-			tc = 3;
-			break;
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_NEIGH_DISCOVERY:
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_IP2ME:
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_IPV6:
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_PTP1:
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_DHCP:
-			priority = 2;
-			tc = 2;
-			break;
 		case MLXSW_REG_HTGT_TRAP_GROUP_SP_ROUTER_EXP:
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_REMOTE_ROUTE:
 		case MLXSW_REG_HTGT_TRAP_GROUP_SP_MULTICAST:
 		case MLXSW_REG_HTGT_TRAP_GROUP_SP_FID_MISS:
 			priority = 1;
 			tc = 1;
 			break;
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_PKT_SAMPLE:
-		case MLXSW_REG_HTGT_TRAP_GROUP_SP_LBERROR:
-			priority = 0;
-			tc = 0;
-			break;
 		case MLXSW_REG_HTGT_TRAP_GROUP_SP_EVENT:
 			priority = MLXSW_REG_HTGT_DEFAULT_PRIORITY;
 			tc = MLXSW_REG_HTGT_DEFAULT_TC;
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
index 147a5634244b..6f96ca50c9ba 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
@@ -451,6 +451,10 @@ extern struct notifier_block mlxsw_sp_switchdev_notifier;
 /* spectrum.c */
 void mlxsw_sp_rx_listener_no_mark_func(struct sk_buff *skb, u8 local_port, void *priv);
+void mlxsw_sp_ptp_receive(struct mlxsw_sp *mlxsw_sp, struct sk_buff *skb,
+			  u8 local_port);
+void mlxsw_sp_sample_receive(struct mlxsw_sp *mlxsw_sp, struct sk_buff *skb,
+			     u8 local_port);
 int mlxsw_sp_port_speed_get(struct mlxsw_sp_port *mlxsw_sp_port, u32 *speed);
 int mlxsw_sp_port_ets_set(struct mlxsw_sp_port *mlxsw_sp_port, enum mlxsw_reg_qeec_hr hr, u8 index, u8 next_index,
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c
index f4b812276a5a..157a42c63066 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c
@@ -125,8 +125,8 @@ static void mlxsw_sp_rx_acl_drop_listener(struct sk_buff *skb, u8 local_port,
 	consume_skb(skb);
 }
 
-static void mlxsw_sp_rx_exception_listener(struct sk_buff *skb, u8 local_port,
-					   void *trap_ctx)
+static int __mlxsw_sp_rx_no_mark_listener(struct sk_buff *skb, u8 local_port,
+					  void *trap_ctx)
 {
 	struct devlink_port *in_devlink_port;
 	struct mlxsw_sp_port *mlxsw_sp_port;
@@ -139,7 +139,7 @@ static void mlxsw_sp_rx_exception_listener(struct sk_buff *skb, u8 local_port,
 	err = mlxsw_sp_rx_listener(mlxsw_sp, skb, local_port, mlxsw_sp_port);
 	if (err)
-		return;
+		return err;
 
 	devlink = priv_to_devlink(mlxsw_sp->core);
 	in_devlink_port = mlxsw_core_port_devlink_port_get(mlxsw_sp->core,
@@ -147,10 +147,71 @@ static void mlxsw_sp_rx_exception_listener(struct sk_buff *skb, u8 local_port,
 	skb_push(skb, ETH_HLEN);
 	devlink_trap_report(devlink, skb, trap_ctx, in_devlink_port, NULL);
 	skb_pull(skb, ETH_HLEN);
-	skb->offload_fwd_mark = 1;
+
+	return 0;
+}
+
+static void mlxsw_sp_rx_no_mark_listener(struct sk_buff *skb, u8 local_port,
+					 void *trap_ctx)
+{
+	int err;
+
+	err = __mlxsw_sp_rx_no_mark_listener(skb, local_port, trap_ctx);
+	if (err)
+		return;
+
 	netif_receive_skb(skb);
 }
 
+static void mlxsw_sp_rx_mark_listener(struct sk_buff *skb, u8 local_port,
+				      void *trap_ctx)
+{
+	skb->offload_fwd_mark = 1;
+	mlxsw_sp_rx_no_mark_listener(skb, local_port, trap_ctx);
+}
+
+static void mlxsw_sp_rx_l3_mark_listener(struct sk_buff *skb, u8 local_port,
+					 void *trap_ctx)
+{
+	skb->offload_l3_fwd_mark = 1;
+	skb->offload_fwd_mark = 1;
+	mlxsw_sp_rx_no_mark_listener(skb, local_port, trap_ctx);
+}
+
+static void mlxsw_sp_rx_ptp_listener(struct sk_buff *skb, u8 local_port,
+				     void *trap_ctx)
+{
+	struct mlxsw_sp *mlxsw_sp = devlink_trap_ctx_priv(trap_ctx);
+	int err;
+
+	err = __mlxsw_sp_rx_no_mark_listener(skb, local_port, trap_ctx);
+	if (err)
+		return;
+
+	/* The PTP handler expects skb->data to point to the start of the
+	 * Ethernet header.
+	 */
+	skb_push(skb, ETH_HLEN);
+	mlxsw_sp_ptp_receive(mlxsw_sp, skb, local_port);
+}
+
+static void mlxsw_sp_rx_sample_listener(struct sk_buff *skb, u8 local_port,
+					void *trap_ctx)
+{
+	struct mlxsw_sp *mlxsw_sp = devlink_trap_ctx_priv(trap_ctx);
+	int err;
+
+	err = __mlxsw_sp_rx_no_mark_listener(skb, local_port, trap_ctx);
+	if (err)
+		return;
+
+	/* The sample handler expects skb->data to point to the start of the
+	 * Ethernet header.
+ */ + skb_push(skb, ETH_HLEN); + mlxsw_sp_sample_receive(mlxsw_sp, skb, local_port); +} + #define MLXSW_SP_TRAP_DROP(_id, _group_id) \ DEVLINK_TRAP_GENERIC(DROP, DROP, _id, \ DEVLINK_TRAP_GROUP_GENERIC_ID_##_group_id, \ @@ -172,6 +233,11 @@ static void mlxsw_sp_rx_exception_listener(struct sk_buff *skb, u8 local_port, DEVLINK_TRAP_GROUP_GENERIC_ID_##_group_id, \ MLXSW_SP_TRAP_METADATA) +#define MLXSW_SP_TRAP_CONTROL(_id, _group_id, _action) \ + DEVLINK_TRAP_GENERIC(CONTROL, _action, _id, \ + DEVLINK_TRAP_GROUP_GENERIC_ID_##_group_id, \ + MLXSW_SP_TRAP_METADATA) + #define MLXSW_SP_RXL_DISCARD(_id, _group_id) \ MLXSW_RXL_DIS(mlxsw_sp_rx_drop_listener, DISCARD_##_id, \ TRAP_EXCEPTION_TO_CPU, false, SP_##_group_id, \ @@ -183,9 +249,21 @@ static void mlxsw_sp_rx_exception_listener(struct sk_buff *skb, u8 local_port, SET_FW_DEFAULT, SP_##_dis_group_id) #define MLXSW_SP_RXL_EXCEPTION(_id, _group_id, _action) \ - MLXSW_RXL(mlxsw_sp_rx_exception_listener, _id, \ + MLXSW_RXL(mlxsw_sp_rx_mark_listener, _id, \ _action, false, SP_##_group_id, SET_FW_DEFAULT) +#define MLXSW_SP_RXL_NO_MARK(_id, _group_id, _action, _is_ctrl) \ + MLXSW_RXL(mlxsw_sp_rx_no_mark_listener, _id, _action, \ + _is_ctrl, SP_##_group_id, DISCARD) + +#define MLXSW_SP_RXL_MARK(_id, _group_id, _action, _is_ctrl) \ + MLXSW_RXL(mlxsw_sp_rx_mark_listener, _id, _action, _is_ctrl, \ + SP_##_group_id, DISCARD) + +#define MLXSW_SP_RXL_L3_MARK(_id, _group_id, _action, _is_ctrl) \ + MLXSW_RXL(mlxsw_sp_rx_l3_mark_listener, _id, _action, _is_ctrl, \ + SP_##_group_id, DISCARD) + #define MLXSW_SP_TRAP_POLICER(_id, _rate, _burst) \ DEVLINK_TRAP_POLICER(_id, _rate, _burst, \ MLXSW_REG_QPCR_HIGHEST_CIR, \ @@ -199,6 +277,57 @@ mlxsw_sp_trap_policer_items_arr[] = { { .policer = MLXSW_SP_TRAP_POLICER(1, 10 * 1024, 128), }, + { + .policer = MLXSW_SP_TRAP_POLICER(2, 128, 128), + }, + { + .policer = MLXSW_SP_TRAP_POLICER(3, 128, 128), + }, + { + .policer = MLXSW_SP_TRAP_POLICER(4, 128, 128), + }, + { + .policer = MLXSW_SP_TRAP_POLICER(5, 16 * 1024, 128), + }, + { + .policer = MLXSW_SP_TRAP_POLICER(6, 128, 128), + }, + { + .policer = MLXSW_SP_TRAP_POLICER(7, 1024, 128), + }, + { + .policer = MLXSW_SP_TRAP_POLICER(8, 20 * 1024, 1024), + }, + { + .policer = MLXSW_SP_TRAP_POLICER(9, 128, 128), + }, + { + .policer = MLXSW_SP_TRAP_POLICER(10, 1024, 128), + }, + { + .policer = MLXSW_SP_TRAP_POLICER(11, 360, 128), + }, + { + .policer = MLXSW_SP_TRAP_POLICER(12, 128, 128), + }, + { + .policer = MLXSW_SP_TRAP_POLICER(13, 128, 128), + }, + { + .policer = MLXSW_SP_TRAP_POLICER(14, 1024, 128), + }, + { + .policer = MLXSW_SP_TRAP_POLICER(15, 1024, 128), + }, + { + .policer = MLXSW_SP_TRAP_POLICER(16, 24 * 1024, 4096), + }, + { + .policer = MLXSW_SP_TRAP_POLICER(17, 19 * 1024, 4096), + }, + { + .policer = MLXSW_SP_TRAP_POLICER(18, 1024, 128), + }, }; static const struct mlxsw_sp_trap_group_item mlxsw_sp_trap_group_items_arr[] = { @@ -213,6 +342,11 @@ static const struct mlxsw_sp_trap_group_item mlxsw_sp_trap_group_items_arr[] = { .priority = 0, }, { + .group = DEVLINK_TRAP_GROUP_GENERIC(L3_EXCEPTIONS, 1), + .hw_group_id = MLXSW_REG_HTGT_TRAP_GROUP_SP_L3_EXCEPTIONS, + .priority = 2, + }, + { .group = DEVLINK_TRAP_GROUP_GENERIC(TUNNEL_DROPS, 1), .hw_group_id = MLXSW_REG_HTGT_TRAP_GROUP_SP_TUNNEL_DISCARDS, .priority = 0, @@ -222,6 +356,96 @@ static const struct mlxsw_sp_trap_group_item mlxsw_sp_trap_group_items_arr[] = { .hw_group_id = MLXSW_REG_HTGT_TRAP_GROUP_SP_ACL_DISCARDS, .priority = 0, }, + { + .group = DEVLINK_TRAP_GROUP_GENERIC(STP, 2), + .hw_group_id = 
MLXSW_REG_HTGT_TRAP_GROUP_SP_STP, + .priority = 5, + }, + { + .group = DEVLINK_TRAP_GROUP_GENERIC(LACP, 3), + .hw_group_id = MLXSW_REG_HTGT_TRAP_GROUP_SP_LACP, + .priority = 5, + }, + { + .group = DEVLINK_TRAP_GROUP_GENERIC(LLDP, 4), + .hw_group_id = MLXSW_REG_HTGT_TRAP_GROUP_SP_LLDP, + .priority = 5, + }, + { + .group = DEVLINK_TRAP_GROUP_GENERIC(MC_SNOOPING, 5), + .hw_group_id = MLXSW_REG_HTGT_TRAP_GROUP_SP_MC_SNOOPING, + .priority = 3, + }, + { + .group = DEVLINK_TRAP_GROUP_GENERIC(DHCP, 6), + .hw_group_id = MLXSW_REG_HTGT_TRAP_GROUP_SP_DHCP, + .priority = 2, + }, + { + .group = DEVLINK_TRAP_GROUP_GENERIC(NEIGH_DISCOVERY, 7), + .hw_group_id = MLXSW_REG_HTGT_TRAP_GROUP_SP_NEIGH_DISCOVERY, + .priority = 2, + }, + { + .group = DEVLINK_TRAP_GROUP_GENERIC(BFD, 8), + .hw_group_id = MLXSW_REG_HTGT_TRAP_GROUP_SP_BFD, + .priority = 5, + }, + { + .group = DEVLINK_TRAP_GROUP_GENERIC(OSPF, 9), + .hw_group_id = MLXSW_REG_HTGT_TRAP_GROUP_SP_OSPF, + .priority = 5, + }, + { + .group = DEVLINK_TRAP_GROUP_GENERIC(BGP, 10), + .hw_group_id = MLXSW_REG_HTGT_TRAP_GROUP_SP_BGP, + .priority = 4, + }, + { + .group = DEVLINK_TRAP_GROUP_GENERIC(VRRP, 11), + .hw_group_id = MLXSW_REG_HTGT_TRAP_GROUP_SP_VRRP, + .priority = 5, + }, + { + .group = DEVLINK_TRAP_GROUP_GENERIC(PIM, 12), + .hw_group_id = MLXSW_REG_HTGT_TRAP_GROUP_SP_PIM, + .priority = 5, + }, + { + .group = DEVLINK_TRAP_GROUP_GENERIC(UC_LB, 13), + .hw_group_id = MLXSW_REG_HTGT_TRAP_GROUP_SP_LBERROR, + .priority = 0, + }, + { + .group = DEVLINK_TRAP_GROUP_GENERIC(LOCAL_DELIVERY, 14), + .hw_group_id = MLXSW_REG_HTGT_TRAP_GROUP_SP_IP2ME, + .priority = 2, + }, + { + .group = DEVLINK_TRAP_GROUP_GENERIC(IPV6, 15), + .hw_group_id = MLXSW_REG_HTGT_TRAP_GROUP_SP_IPV6, + .priority = 2, + }, + { + .group = DEVLINK_TRAP_GROUP_GENERIC(PTP_EVENT, 16), + .hw_group_id = MLXSW_REG_HTGT_TRAP_GROUP_SP_PTP0, + .priority = 5, + }, + { + .group = DEVLINK_TRAP_GROUP_GENERIC(PTP_GENERAL, 17), + .hw_group_id = MLXSW_REG_HTGT_TRAP_GROUP_SP_PTP1, + .priority = 2, + }, + { + .group = DEVLINK_TRAP_GROUP_GENERIC(ACL_SAMPLE, 0), + .hw_group_id = MLXSW_REG_HTGT_TRAP_GROUP_SP_PKT_SAMPLE, + .priority = 0, + }, + { + .group = DEVLINK_TRAP_GROUP_GENERIC(ACL_TRAP, 18), + .hw_group_id = MLXSW_REG_HTGT_TRAP_GROUP_SP_FLOW_LOGGING, + .priority = 4, + }, }; static const struct mlxsw_sp_trap_item mlxsw_sp_trap_items_arr[] = { @@ -332,56 +556,59 @@ static const struct mlxsw_sp_trap_item mlxsw_sp_trap_items_arr[] = { }, }, { - .trap = MLXSW_SP_TRAP_EXCEPTION(MTU_ERROR, L3_DROPS), + .trap = MLXSW_SP_TRAP_EXCEPTION(MTU_ERROR, L3_EXCEPTIONS), .listeners_arr = { - MLXSW_SP_RXL_EXCEPTION(MTUERROR, L3_DISCARDS, + MLXSW_SP_RXL_EXCEPTION(MTUERROR, L3_EXCEPTIONS, TRAP_TO_CPU), }, }, { - .trap = MLXSW_SP_TRAP_EXCEPTION(TTL_ERROR, L3_DROPS), + .trap = MLXSW_SP_TRAP_EXCEPTION(TTL_ERROR, L3_EXCEPTIONS), .listeners_arr = { - MLXSW_SP_RXL_EXCEPTION(TTLERROR, L3_DISCARDS, + MLXSW_SP_RXL_EXCEPTION(TTLERROR, L3_EXCEPTIONS, TRAP_TO_CPU), }, }, { - .trap = MLXSW_SP_TRAP_EXCEPTION(RPF, L3_DROPS), + .trap = MLXSW_SP_TRAP_EXCEPTION(RPF, L3_EXCEPTIONS), .listeners_arr = { - MLXSW_SP_RXL_EXCEPTION(RPF, L3_DISCARDS, TRAP_TO_CPU), + MLXSW_SP_RXL_EXCEPTION(RPF, L3_EXCEPTIONS, TRAP_TO_CPU), }, }, { - .trap = MLXSW_SP_TRAP_EXCEPTION(REJECT_ROUTE, L3_DROPS), + .trap = MLXSW_SP_TRAP_EXCEPTION(REJECT_ROUTE, L3_EXCEPTIONS), .listeners_arr = { - MLXSW_SP_RXL_EXCEPTION(RTR_INGRESS1, L3_DISCARDS, + MLXSW_SP_RXL_EXCEPTION(RTR_INGRESS1, L3_EXCEPTIONS, TRAP_TO_CPU), }, }, { - .trap = MLXSW_SP_TRAP_EXCEPTION(UNRESOLVED_NEIGH, L3_DROPS), + 
.trap = MLXSW_SP_TRAP_EXCEPTION(UNRESOLVED_NEIGH, + L3_EXCEPTIONS), .listeners_arr = { - MLXSW_SP_RXL_EXCEPTION(HOST_MISS_IPV4, L3_DISCARDS, + MLXSW_SP_RXL_EXCEPTION(HOST_MISS_IPV4, L3_EXCEPTIONS, TRAP_TO_CPU), - MLXSW_SP_RXL_EXCEPTION(HOST_MISS_IPV6, L3_DISCARDS, + MLXSW_SP_RXL_EXCEPTION(HOST_MISS_IPV6, L3_EXCEPTIONS, TRAP_TO_CPU), - MLXSW_SP_RXL_EXCEPTION(DISCARD_ROUTER3, L3_DISCARDS, + MLXSW_SP_RXL_EXCEPTION(DISCARD_ROUTER3, L3_EXCEPTIONS, TRAP_EXCEPTION_TO_CPU), }, }, { .trap = MLXSW_SP_TRAP_EXCEPTION(IPV4_LPM_UNICAST_MISS, - L3_DROPS), + L3_EXCEPTIONS), .listeners_arr = { - MLXSW_SP_RXL_EXCEPTION(DISCARD_ROUTER_LPM4, L3_DISCARDS, + MLXSW_SP_RXL_EXCEPTION(DISCARD_ROUTER_LPM4, + L3_EXCEPTIONS, TRAP_EXCEPTION_TO_CPU), }, }, { .trap = MLXSW_SP_TRAP_EXCEPTION(IPV6_LPM_UNICAST_MISS, - L3_DROPS), + L3_EXCEPTIONS), .listeners_arr = { - MLXSW_SP_RXL_EXCEPTION(DISCARD_ROUTER_LPM6, L3_DISCARDS, + MLXSW_SP_RXL_EXCEPTION(DISCARD_ROUTER_LPM6, + L3_EXCEPTIONS, TRAP_EXCEPTION_TO_CPU), }, }, @@ -439,6 +666,320 @@ static const struct mlxsw_sp_trap_item mlxsw_sp_trap_items_arr[] = { DUMMY), }, }, + { + .trap = MLXSW_SP_TRAP_CONTROL(STP, STP, TRAP), + .listeners_arr = { + MLXSW_SP_RXL_NO_MARK(STP, STP, TRAP_TO_CPU, true), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(LACP, LACP, TRAP), + .listeners_arr = { + MLXSW_SP_RXL_NO_MARK(LACP, LACP, TRAP_TO_CPU, true), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(LLDP, LLDP, TRAP), + .listeners_arr = { + MLXSW_RXL(mlxsw_sp_rx_ptp_listener, LLDP, TRAP_TO_CPU, + false, SP_LLDP, DISCARD), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(IGMP_QUERY, MC_SNOOPING, MIRROR), + .listeners_arr = { + MLXSW_SP_RXL_MARK(IGMP_QUERY, MC_SNOOPING, + MIRROR_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(IGMP_V1_REPORT, MC_SNOOPING, + TRAP), + .listeners_arr = { + MLXSW_SP_RXL_NO_MARK(IGMP_V1_REPORT, MC_SNOOPING, + TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(IGMP_V2_REPORT, MC_SNOOPING, + TRAP), + .listeners_arr = { + MLXSW_SP_RXL_NO_MARK(IGMP_V2_REPORT, MC_SNOOPING, + TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(IGMP_V3_REPORT, MC_SNOOPING, + TRAP), + .listeners_arr = { + MLXSW_SP_RXL_NO_MARK(IGMP_V3_REPORT, MC_SNOOPING, + TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(IGMP_V2_LEAVE, MC_SNOOPING, + TRAP), + .listeners_arr = { + MLXSW_SP_RXL_NO_MARK(IGMP_V2_LEAVE, MC_SNOOPING, + TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(MLD_QUERY, MC_SNOOPING, MIRROR), + .listeners_arr = { + MLXSW_SP_RXL_MARK(IPV6_MLDV12_LISTENER_QUERY, + MC_SNOOPING, MIRROR_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(MLD_V1_REPORT, MC_SNOOPING, + TRAP), + .listeners_arr = { + MLXSW_SP_RXL_NO_MARK(IPV6_MLDV1_LISTENER_REPORT, + MC_SNOOPING, TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(MLD_V2_REPORT, MC_SNOOPING, + TRAP), + .listeners_arr = { + MLXSW_SP_RXL_NO_MARK(IPV6_MLDV2_LISTENER_REPORT, + MC_SNOOPING, TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(MLD_V1_DONE, MC_SNOOPING, + TRAP), + .listeners_arr = { + MLXSW_SP_RXL_NO_MARK(IPV6_MLDV1_LISTENER_DONE, + MC_SNOOPING, TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(IPV4_DHCP, DHCP, TRAP), + .listeners_arr = { + MLXSW_SP_RXL_MARK(IPV4_DHCP, DHCP, TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(IPV6_DHCP, DHCP, TRAP), + .listeners_arr = { + MLXSW_SP_RXL_MARK(IPV6_DHCP, DHCP, TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(ARP_REQUEST, 
NEIGH_DISCOVERY, + MIRROR), + .listeners_arr = { + MLXSW_SP_RXL_MARK(ARPBC, NEIGH_DISCOVERY, MIRROR_TO_CPU, + false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(ARP_RESPONSE, NEIGH_DISCOVERY, + MIRROR), + .listeners_arr = { + MLXSW_SP_RXL_MARK(ARPUC, NEIGH_DISCOVERY, MIRROR_TO_CPU, + false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(ARP_OVERLAY, NEIGH_DISCOVERY, + TRAP), + .listeners_arr = { + MLXSW_SP_RXL_NO_MARK(NVE_DECAP_ARP, NEIGH_DISCOVERY, + TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(IPV6_NEIGH_SOLICIT, + NEIGH_DISCOVERY, TRAP), + .listeners_arr = { + MLXSW_SP_RXL_MARK(L3_IPV6_NEIGHBOR_SOLICITATION, + NEIGH_DISCOVERY, TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(IPV6_NEIGH_ADVERT, + NEIGH_DISCOVERY, TRAP), + .listeners_arr = { + MLXSW_SP_RXL_MARK(L3_IPV6_NEIGHBOR_ADVERTISEMENT, + NEIGH_DISCOVERY, TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(IPV4_BFD, BFD, TRAP), + .listeners_arr = { + MLXSW_SP_RXL_MARK(IPV4_BFD, BFD, TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(IPV6_BFD, BFD, TRAP), + .listeners_arr = { + MLXSW_SP_RXL_MARK(IPV6_BFD, BFD, TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(IPV4_OSPF, OSPF, TRAP), + .listeners_arr = { + MLXSW_SP_RXL_MARK(IPV4_OSPF, OSPF, TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(IPV6_OSPF, OSPF, TRAP), + .listeners_arr = { + MLXSW_SP_RXL_MARK(IPV6_OSPF, OSPF, TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(IPV4_BGP, BGP, TRAP), + .listeners_arr = { + MLXSW_SP_RXL_MARK(IPV4_BGP, BGP, TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(IPV6_BGP, BGP, TRAP), + .listeners_arr = { + MLXSW_SP_RXL_MARK(IPV6_BGP, BGP, TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(IPV4_VRRP, VRRP, TRAP), + .listeners_arr = { + MLXSW_SP_RXL_MARK(IPV4_VRRP, VRRP, TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(IPV6_VRRP, VRRP, TRAP), + .listeners_arr = { + MLXSW_SP_RXL_MARK(IPV6_VRRP, VRRP, TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(IPV4_PIM, PIM, TRAP), + .listeners_arr = { + MLXSW_SP_RXL_MARK(IPV4_PIM, PIM, TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(IPV6_PIM, PIM, TRAP), + .listeners_arr = { + MLXSW_SP_RXL_MARK(IPV6_PIM, PIM, TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(UC_LB, UC_LB, MIRROR), + .listeners_arr = { + MLXSW_SP_RXL_L3_MARK(LBERROR, LBERROR, MIRROR_TO_CPU, + false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(LOCAL_ROUTE, LOCAL_DELIVERY, + TRAP), + .listeners_arr = { + MLXSW_SP_RXL_MARK(IP2ME, IP2ME, TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(EXTERNAL_ROUTE, LOCAL_DELIVERY, + TRAP), + .listeners_arr = { + MLXSW_SP_RXL_MARK(RTR_INGRESS0, IP2ME, TRAP_TO_CPU, + false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(IPV6_UC_DIP_LINK_LOCAL_SCOPE, + LOCAL_DELIVERY, TRAP), + .listeners_arr = { + MLXSW_SP_RXL_MARK(IPV6_LINK_LOCAL_DEST, IP2ME, + TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(IPV4_ROUTER_ALERT, LOCAL_DELIVERY, + TRAP), + .listeners_arr = { + MLXSW_SP_RXL_MARK(ROUTER_ALERT_IPV4, IP2ME, TRAP_TO_CPU, + false), + }, + }, + { + /* IPV6_ROUTER_ALERT is defined in uAPI as 22, but it is not + * used in this file, so undefine it. 
+ */ + #undef IPV6_ROUTER_ALERT + .trap = MLXSW_SP_TRAP_CONTROL(IPV6_ROUTER_ALERT, LOCAL_DELIVERY, + TRAP), + .listeners_arr = { + MLXSW_SP_RXL_MARK(ROUTER_ALERT_IPV6, IP2ME, TRAP_TO_CPU, + false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(IPV6_DIP_ALL_NODES, IPV6, TRAP), + .listeners_arr = { + MLXSW_SP_RXL_MARK(IPV6_ALL_NODES_LINK, IPV6, + TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(IPV6_DIP_ALL_ROUTERS, IPV6, TRAP), + .listeners_arr = { + MLXSW_SP_RXL_MARK(IPV6_ALL_ROUTERS_LINK, IPV6, + TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(IPV6_ROUTER_SOLICIT, IPV6, TRAP), + .listeners_arr = { + MLXSW_SP_RXL_MARK(L3_IPV6_ROUTER_SOLICITATION, IPV6, + TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(IPV6_ROUTER_ADVERT, IPV6, TRAP), + .listeners_arr = { + MLXSW_SP_RXL_MARK(L3_IPV6_ROUTER_ADVERTISEMENT, IPV6, + TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(IPV6_REDIRECT, IPV6, TRAP), + .listeners_arr = { + MLXSW_SP_RXL_MARK(L3_IPV6_REDIRECTION, IPV6, + TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(PTP_EVENT, PTP_EVENT, TRAP), + .listeners_arr = { + MLXSW_RXL(mlxsw_sp_rx_ptp_listener, PTP0, TRAP_TO_CPU, + false, SP_PTP0, DISCARD), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(PTP_GENERAL, PTP_GENERAL, TRAP), + .listeners_arr = { + MLXSW_SP_RXL_NO_MARK(PTP1, PTP1, TRAP_TO_CPU, false), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(FLOW_ACTION_SAMPLE, ACL_SAMPLE, + MIRROR), + .listeners_arr = { + MLXSW_RXL(mlxsw_sp_rx_sample_listener, PKT_SAMPLE, + MIRROR_TO_CPU, false, SP_PKT_SAMPLE, DISCARD), + }, + }, + { + .trap = MLXSW_SP_TRAP_CONTROL(FLOW_ACTION_TRAP, ACL_TRAP, TRAP), + .listeners_arr = { + MLXSW_SP_RXL_NO_MARK(ACL0, FLOW_LOGGING, TRAP_TO_CPU, + false), + }, + }, }; static struct mlxsw_sp_trap_policer_item * diff --git a/drivers/net/ethernet/microchip/lan743x_ethtool.c b/drivers/net/ethernet/microchip/lan743x_ethtool.c index 3a0b289d9771..c533d06fbe3a 100644 --- a/drivers/net/ethernet/microchip/lan743x_ethtool.c +++ b/drivers/net/ethernet/microchip/lan743x_ethtool.c @@ -2,11 +2,11 @@ /* Copyright (C) 2018 Microchip Technology Inc. */ #include <linux/netdevice.h> -#include "lan743x_main.h" -#include "lan743x_ethtool.h" #include <linux/net_tstamp.h> #include <linux/pci.h> #include <linux/phy.h> +#include "lan743x_main.h" +#include "lan743x_ethtool.h" /* eeprom */ #define LAN743X_EEPROM_MAGIC (0x74A5) diff --git a/drivers/net/ethernet/microchip/lan743x_main.c b/drivers/net/ethernet/microchip/lan743x_main.c index a43140f7b5eb..36624e3c633b 100644 --- a/drivers/net/ethernet/microchip/lan743x_main.c +++ b/drivers/net/ethernet/microchip/lan743x_main.c @@ -8,7 +8,10 @@ #include <linux/crc32.h> #include <linux/microchipphy.h> #include <linux/net_tstamp.h> +#include <linux/of_mdio.h> +#include <linux/of_net.h> #include <linux/phy.h> +#include <linux/phy_fixed.h> #include <linux/rtnetlink.h> #include <linux/iopoll.h> #include <linux/crc16.h> @@ -798,9 +801,9 @@ static int lan743x_mac_init(struct lan743x_adapter *adapter) netdev = adapter->netdev; - /* setup auto duplex, and speed detection */ + /* disable auto duplex, and speed detection. 
Phylib does that */ data = lan743x_csr_read(adapter, MAC_CR); - data |= MAC_CR_ADD_ | MAC_CR_ASD_; + data &= ~(MAC_CR_ADD_ | MAC_CR_ASD_); data |= MAC_CR_CNTR_RST_; lan743x_csr_write(adapter, MAC_CR, data); @@ -946,6 +949,7 @@ static void lan743x_phy_link_status_change(struct net_device *netdev) { struct lan743x_adapter *adapter = netdev_priv(netdev); struct phy_device *phydev = netdev->phydev; + u32 data; phy_print_status(phydev); if (phydev->state == PHY_RUNNING) { @@ -953,6 +957,39 @@ static void lan743x_phy_link_status_change(struct net_device *netdev) int remote_advertisement = 0; int local_advertisement = 0; + data = lan743x_csr_read(adapter, MAC_CR); + + /* set interface mode */ + if (phy_interface_mode_is_rgmii(adapter->phy_mode)) + /* RGMII */ + data &= ~MAC_CR_MII_EN_; + else + /* GMII */ + data |= MAC_CR_MII_EN_; + + /* set duplex mode */ + if (phydev->duplex) + data |= MAC_CR_DPX_; + else + data &= ~MAC_CR_DPX_; + + /* set bus speed */ + switch (phydev->speed) { + case SPEED_10: + data &= ~MAC_CR_CFG_H_; + data &= ~MAC_CR_CFG_L_; + break; + case SPEED_100: + data &= ~MAC_CR_CFG_H_; + data |= MAC_CR_CFG_L_; + break; + case SPEED_1000: + data |= MAC_CR_CFG_H_; + data |= MAC_CR_CFG_L_; + break; + } + lan743x_csr_write(adapter, MAC_CR, data); + memset(&ksettings, 0, sizeof(ksettings)); phy_ethtool_get_link_ksettings(netdev, &ksettings); local_advertisement = @@ -980,20 +1017,44 @@ static void lan743x_phy_close(struct lan743x_adapter *adapter) static int lan743x_phy_open(struct lan743x_adapter *adapter) { struct lan743x_phy *phy = &adapter->phy; + struct device_node *phynode; struct phy_device *phydev; struct net_device *netdev; int ret = -EIO; netdev = adapter->netdev; - phydev = phy_find_first(adapter->mdiobus); - if (!phydev) - goto return_error; + phynode = of_node_get(adapter->pdev->dev.of_node); + adapter->phy_mode = PHY_INTERFACE_MODE_GMII; + + if (phynode) { + of_get_phy_mode(phynode, &adapter->phy_mode); + + if (of_phy_is_fixed_link(phynode)) { + ret = of_phy_register_fixed_link(phynode); + if (ret) { + netdev_err(netdev, + "cannot register fixed PHY\n"); + of_node_put(phynode); + goto return_error; + } + } + phydev = of_phy_connect(netdev, phynode, + lan743x_phy_link_status_change, 0, + adapter->phy_mode); + of_node_put(phynode); + if (!phydev) + goto return_error; + } else { + phydev = phy_find_first(adapter->mdiobus); + if (!phydev) + goto return_error; - ret = phy_connect_direct(netdev, phydev, - lan743x_phy_link_status_change, - PHY_INTERFACE_MODE_GMII); - if (ret) - goto return_error; + ret = phy_connect_direct(netdev, phydev, + lan743x_phy_link_status_change, + adapter->phy_mode); + if (ret) + goto return_error; + } /* MAC doesn't support 1000T Half */ phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_1000baseT_Half_BIT); diff --git a/drivers/net/ethernet/microchip/lan743x_main.h b/drivers/net/ethernet/microchip/lan743x_main.h index 3b02eeae5f45..c61a40411317 100644 --- a/drivers/net/ethernet/microchip/lan743x_main.h +++ b/drivers/net/ethernet/microchip/lan743x_main.h @@ -4,6 +4,7 @@ #ifndef _LAN743X_H #define _LAN743X_H +#include <linux/phy.h> #include "lan743x_ptp.h" #define DRIVER_AUTHOR "Bryan Whitehead <Bryan.Whitehead@microchip.com>" @@ -104,10 +105,14 @@ ((value << 0) & FCT_FLOW_CTL_ON_THRESHOLD_) #define MAC_CR (0x100) +#define MAC_CR_MII_EN_ BIT(19) #define MAC_CR_EEE_EN_ BIT(17) #define MAC_CR_ADD_ BIT(12) #define MAC_CR_ASD_ BIT(11) #define MAC_CR_CNTR_RST_ BIT(5) +#define MAC_CR_DPX_ BIT(3) +#define MAC_CR_CFG_H_ BIT(2) +#define MAC_CR_CFG_L_ BIT(1) #define 
MAC_CR_RST_ BIT(0) #define MAC_RX (0x104) @@ -698,6 +703,7 @@ struct lan743x_rx { struct lan743x_adapter { struct net_device *netdev; struct mii_bus *mdiobus; + phy_interface_t phy_mode; int msg_enable; #ifdef CONFIG_PM u32 wolopts; diff --git a/drivers/net/ethernet/microchip/lan743x_ptp.c b/drivers/net/ethernet/microchip/lan743x_ptp.c index 9399f6a98748..ab6d719d40f0 100644 --- a/drivers/net/ethernet/microchip/lan743x_ptp.c +++ b/drivers/net/ethernet/microchip/lan743x_ptp.c @@ -2,12 +2,12 @@ /* Copyright (C) 2018 Microchip Technology Inc. */ #include <linux/netdevice.h> -#include "lan743x_main.h" #include <linux/ptp_clock_kernel.h> #include <linux/module.h> #include <linux/pci.h> #include <linux/net_tstamp.h> +#include "lan743x_main.h" #include "lan743x_ptp.h" diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.c b/drivers/net/ethernet/netronome/nfp/flower/main.c index ca7032d22196..c39327677a7d 100644 --- a/drivers/net/ethernet/netronome/nfp/flower/main.c +++ b/drivers/net/ethernet/netronome/nfp/flower/main.c @@ -830,6 +830,10 @@ static int nfp_flower_init(struct nfp_app *app) if (err) goto err_cleanup; + err = flow_indr_dev_register(nfp_flower_indr_setup_tc_cb, app); + if (err) + goto err_cleanup; + if (app_priv->flower_ext_feats & NFP_FL_FEATS_VF_RLIM) nfp_flower_qos_init(app); @@ -856,6 +860,9 @@ static void nfp_flower_clean(struct nfp_app *app) skb_queue_purge(&app_priv->cmsg_skbs_low); flush_work(&app_priv->cmsg_work); + flow_indr_dev_unregister(nfp_flower_indr_setup_tc_cb, app, + nfp_flower_setup_indr_block_cb); + if (app_priv->flower_ext_feats & NFP_FL_FEATS_VF_RLIM) nfp_flower_qos_cleanup(app); @@ -959,10 +966,6 @@ nfp_flower_netdev_event(struct nfp_app *app, struct net_device *netdev, return ret; } - ret = nfp_flower_reg_indir_block_handler(app, netdev, event); - if (ret & NOTIFY_STOP_MASK) - return ret; - ret = nfp_flower_internal_port_event_handler(app, netdev, event); if (ret & NOTIFY_STOP_MASK) return ret; diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.h b/drivers/net/ethernet/netronome/nfp/flower/main.h index 59abea2a39ad..6c3dc3baf387 100644 --- a/drivers/net/ethernet/netronome/nfp/flower/main.h +++ b/drivers/net/ethernet/netronome/nfp/flower/main.h @@ -458,9 +458,10 @@ void nfp_flower_qos_cleanup(struct nfp_app *app); int nfp_flower_setup_qos_offload(struct nfp_app *app, struct net_device *netdev, struct tc_cls_matchall_offload *flow); void nfp_flower_stats_rlim_reply(struct nfp_app *app, struct sk_buff *skb); -int nfp_flower_reg_indir_block_handler(struct nfp_app *app, - struct net_device *netdev, - unsigned long event); +int nfp_flower_indr_setup_tc_cb(struct net_device *netdev, void *cb_priv, + enum tc_setup_type type, void *type_data); +int nfp_flower_setup_indr_block_cb(enum tc_setup_type type, void *type_data, + void *cb_priv); void __nfp_flower_non_repr_priv_get(struct nfp_flower_non_repr_priv *non_repr_priv); diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c index c694dbc239d0..695d24b9dd92 100644 --- a/drivers/net/ethernet/netronome/nfp/flower/offload.c +++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c @@ -1440,7 +1440,8 @@ __nfp_flower_update_merge_stats(struct nfp_app *app, ctx_id = be32_to_cpu(sub_flow->meta.host_ctx_id); priv->stats[ctx_id].pkts += pkts; priv->stats[ctx_id].bytes += bytes; - max_t(u64, priv->stats[ctx_id].used, used); + priv->stats[ctx_id].used = max_t(u64, used, + priv->stats[ctx_id].used); } } @@ -1618,8 +1619,8 @@ 
nfp_flower_indr_block_cb_priv_lookup(struct nfp_app *app, return NULL; } -static int nfp_flower_setup_indr_block_cb(enum tc_setup_type type, - void *type_data, void *cb_priv) +int nfp_flower_setup_indr_block_cb(enum tc_setup_type type, + void *type_data, void *cb_priv) { struct nfp_flower_indr_block_cb_priv *priv = cb_priv; struct flow_cls_offload *flower = type_data; @@ -1707,10 +1708,13 @@ nfp_flower_setup_indr_tc_block(struct net_device *netdev, struct nfp_app *app, return 0; } -static int +int nfp_flower_indr_setup_tc_cb(struct net_device *netdev, void *cb_priv, enum tc_setup_type type, void *type_data) { + if (!nfp_fl_is_netdev_to_offload(netdev)) + return -EOPNOTSUPP; + switch (type) { case TC_SETUP_BLOCK: return nfp_flower_setup_indr_tc_block(netdev, cb_priv, @@ -1719,29 +1723,3 @@ nfp_flower_indr_setup_tc_cb(struct net_device *netdev, void *cb_priv, return -EOPNOTSUPP; } } - -int nfp_flower_reg_indir_block_handler(struct nfp_app *app, - struct net_device *netdev, - unsigned long event) -{ - int err; - - if (!nfp_fl_is_netdev_to_offload(netdev)) - return NOTIFY_OK; - - if (event == NETDEV_REGISTER) { - err = __flow_indr_block_cb_register(netdev, app, - nfp_flower_indr_setup_tc_cb, - app); - if (err) - nfp_flower_cmsg_warn(app, - "Indirect block reg failed - %s\n", - netdev->name); - } else if (event == NETDEV_UNREGISTER) { - __flow_indr_block_cb_unregister(netdev, - nfp_flower_indr_setup_tc_cb, - app); - } - - return NOTIFY_OK; -} diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c index 2a533280b124..29b9c728a65e 100644 --- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c +++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c @@ -3651,7 +3651,7 @@ int qlcnic_83xx_interrupt_test(struct net_device *netdev) ahw->diag_cnt = 0; ret = qlcnic_alloc_mbx_args(&cmd, adapter, QLCNIC_CMD_INTRPT_TEST); if (ret) - goto fail_diag_irq; + goto fail_mbx_args; if (adapter->flags & QLCNIC_MSIX_ENABLED) intrpt_id = ahw->intr_tbl[0].id; @@ -3681,6 +3681,8 @@ int qlcnic_83xx_interrupt_test(struct net_device *netdev) done: qlcnic_free_mbx_args(&cmd); + +fail_mbx_args: qlcnic_83xx_diag_free_res(netdev, drv_sds_rings); fail_diag_irq: diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index b6f92c719553..73677c3b33b6 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -630,7 +630,8 @@ static int stmmac_hwtstamp_set(struct net_device *dev, struct ifreq *ifr) config.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT; ptp_v2 = PTP_TCR_TSVER2ENA; snap_type_sel = PTP_TCR_SNAPTYPSEL_1; - ts_event_en = PTP_TCR_TSEVNTENA; + if (priv->synopsys_id != DWMAC_CORE_5_10) + ts_event_en = PTP_TCR_TSEVNTENA; ptp_over_ipv4_udp = PTP_TCR_TSIPV4ENA; ptp_over_ipv6_udp = PTP_TCR_TSIPV6ENA; ptp_over_ethernet = PTP_TCR_TSIPENA; diff --git a/drivers/net/netdevsim/dev.c b/drivers/net/netdevsim/dev.c index dc3ff0e20944..ec6b6f7818ac 100644 --- a/drivers/net/netdevsim/dev.c +++ b/drivers/net/netdevsim/dev.c @@ -431,6 +431,10 @@ enum { DEVLINK_TRAP_GENERIC(EXCEPTION, TRAP, _id, \ DEVLINK_TRAP_GROUP_GENERIC_ID_##_group_id, \ NSIM_TRAP_METADATA) +#define NSIM_TRAP_CONTROL(_id, _group_id, _action) \ + DEVLINK_TRAP_GENERIC(CONTROL, _action, _id, \ + DEVLINK_TRAP_GROUP_GENERIC_ID_##_group_id, \ + NSIM_TRAP_METADATA) #define NSIM_TRAP_DRIVER_EXCEPTION(_id, _group_id) \ DEVLINK_TRAP_DRIVER(EXCEPTION, TRAP, NSIM_TRAP_ID_##_id, \ 
NSIM_TRAP_NAME_##_id, \ @@ -458,8 +462,10 @@ static const struct devlink_trap_policer nsim_trap_policers_arr[] = { static const struct devlink_trap_group nsim_trap_groups_arr[] = { DEVLINK_TRAP_GROUP_GENERIC(L2_DROPS, 0), DEVLINK_TRAP_GROUP_GENERIC(L3_DROPS, 1), + DEVLINK_TRAP_GROUP_GENERIC(L3_EXCEPTIONS, 1), DEVLINK_TRAP_GROUP_GENERIC(BUFFER_DROPS, 2), DEVLINK_TRAP_GROUP_GENERIC(ACL_DROPS, 3), + DEVLINK_TRAP_GROUP_GENERIC(MC_SNOOPING, 3), }; static const struct devlink_trap nsim_traps_arr[] = { @@ -471,12 +477,14 @@ static const struct devlink_trap nsim_traps_arr[] = { NSIM_TRAP_DROP(PORT_LOOPBACK_FILTER, L2_DROPS), NSIM_TRAP_DRIVER_EXCEPTION(FID_MISS, L2_DROPS), NSIM_TRAP_DROP(BLACKHOLE_ROUTE, L3_DROPS), - NSIM_TRAP_EXCEPTION(TTL_ERROR, L3_DROPS), + NSIM_TRAP_EXCEPTION(TTL_ERROR, L3_EXCEPTIONS), NSIM_TRAP_DROP(TAIL_DROP, BUFFER_DROPS), NSIM_TRAP_DROP_EXT(INGRESS_FLOW_ACTION_DROP, ACL_DROPS, DEVLINK_TRAP_METADATA_TYPE_F_FA_COOKIE), NSIM_TRAP_DROP_EXT(EGRESS_FLOW_ACTION_DROP, ACL_DROPS, DEVLINK_TRAP_METADATA_TYPE_F_FA_COOKIE), + NSIM_TRAP_CONTROL(IGMP_QUERY, MC_SNOOPING, MIRROR), + NSIM_TRAP_CONTROL(IGMP_V1_REPORT, MC_SNOOPING, TRAP), }; #define NSIM_TRAP_L4_DATA_LEN 100 diff --git a/drivers/net/phy/bcm-phy-lib.c b/drivers/net/phy/bcm-phy-lib.c index cb92786e3ded..ef6825b30323 100644 --- a/drivers/net/phy/bcm-phy-lib.c +++ b/drivers/net/phy/bcm-phy-lib.c @@ -583,18 +583,16 @@ int bcm_phy_enable_jumbo(struct phy_device *phydev) } EXPORT_SYMBOL_GPL(bcm_phy_enable_jumbo); -int __bcm_phy_enable_rdb_access(struct phy_device *phydev) +static int __bcm_phy_enable_rdb_access(struct phy_device *phydev) { return __bcm_phy_write_exp(phydev, BCM54XX_EXP_REG7E, 0); } -EXPORT_SYMBOL_GPL(__bcm_phy_enable_rdb_access); -int __bcm_phy_enable_legacy_access(struct phy_device *phydev) +static int __bcm_phy_enable_legacy_access(struct phy_device *phydev) { return __bcm_phy_write_rdb(phydev, BCM54XX_RDB_REG0087, BCM54XX_ACCESS_MODE_LEGACY_EN); } -EXPORT_SYMBOL_GPL(__bcm_phy_enable_legacy_access); static int _bcm_phy_cable_test_start(struct phy_device *phydev, bool is_rdb) { diff --git a/drivers/net/tun.c b/drivers/net/tun.c index c54f967e2c66..b0ab882c021e 100644 --- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -1872,8 +1872,11 @@ drop: skb->dev = tun->dev; break; case IFF_TAP: - if (!frags) - skb->protocol = eth_type_trans(skb, tun->dev); + if (frags && !pskb_may_pull(skb, ETH_HLEN)) { + err = -ENOMEM; + goto drop; + } + skb->protocol = eth_type_trans(skb, tun->dev); break; } @@ -1930,9 +1933,12 @@ drop: } if (frags) { + u32 headlen; + /* Exercise flow dissector code path. 
 */
-		u32 headlen = eth_get_headlen(tun->dev, skb->data,
-					      skb_headlen(skb));
+		skb_push(skb, ETH_HLEN);
+		headlen = eth_get_headlen(tun->dev, skb->data,
+					  skb_headlen(skb));
 
 		if (unlikely(headlen > skb_headlen(skb))) {
 			this_cpu_inc(tun->pcpu_stats->rx_dropped);
diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
index b0eab6e5279d..31b1d4b959f6 100644
--- a/drivers/net/usb/qmi_wwan.c
+++ b/drivers/net/usb/qmi_wwan.c
@@ -1324,6 +1324,7 @@ static const struct usb_device_id products[] = {
 	{QMI_FIXED_INTF(0x1bbb, 0x0203, 2)},	/* Alcatel L800MA */
 	{QMI_FIXED_INTF(0x2357, 0x0201, 4)},	/* TP-LINK HSUPA Modem MA180 */
 	{QMI_FIXED_INTF(0x2357, 0x9000, 4)},	/* TP-LINK MA260 */
+	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1031, 3)},	/* Telit LE910C1-EUX */
 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1040, 2)},	/* Telit LE922A */
 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1050, 2)},	/* Telit FN980 */
 	{QMI_FIXED_INTF(0x1bc7, 0x1100, 3)},	/* Telit ME910 */
diff --git a/drivers/net/vmxnet3/vmxnet3_ethtool.c b/drivers/net/vmxnet3/vmxnet3_ethtool.c
index bfdda0f34b97..6acaafe169de 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethtool.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethtool.c
@@ -954,6 +954,8 @@ vmxnet3_get_rss(struct net_device *netdev, u32 *p, u8 *key, u8 *hfunc)
 		*hfunc = ETH_RSS_HASH_TOP;
 	if (!p)
 		return 0;
+	if (n > UPT1_RSS_MAX_IND_TABLE_SIZE)
+		return 0;
 	while (n--)
 		p[n] = rssConf->indTable[n];
 	return 0;
diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
index 39bc10a7fd2e..5bb448ae6c9c 100644
--- a/drivers/net/vxlan.c
+++ b/drivers/net/vxlan.c
@@ -881,13 +881,13 @@ static int vxlan_fdb_nh_update(struct vxlan_dev *vxlan, struct vxlan_fdb *fdb,
 		goto err_inval;
 	}
 
-	if (!nh->is_group || !nh->nh_grp->mpath) {
+	nhg = rtnl_dereference(nh->nh_grp);
+	if (!nh->is_group || !nhg->mpath) {
 		NL_SET_ERR_MSG(extack, "Nexthop is not a multipath group");
 		goto err_inval;
 	}
 
 	/* check nexthop group family */
-	nhg = rtnl_dereference(nh->nh_grp);
 	switch (vxlan->default_dst.remote_ip.sa.sa_family) {
 	case AF_INET:
 		if (!nhg->has_v4) {
@@ -2092,6 +2092,10 @@ static struct sk_buff *vxlan_na_create(struct sk_buff *request,
 	ns_olen = request->len - skb_network_offset(request) - sizeof(struct ipv6hdr) - sizeof(*ns);
 	for (i = 0; i < ns_olen-1; i += (ns->opt[i+1]<<3)) {
+		if (!ns->opt[i + 1]) {
+			kfree_skb(reply);
+			return NULL;
+		}
 		if (ns->opt[i] == ND_OPT_SOURCE_LL_ADDR) {
 			daddr = ns->opt + i + sizeof(struct nd_opt_hdr);
 			break;
diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
index f4ded2f2ee3b..1356e8cbe617 100644
--- a/drivers/net/wireless/mac80211_hwsim.c
+++ b/drivers/net/wireless/mac80211_hwsim.c
@@ -3054,6 +3054,7 @@ static int mac80211_hwsim_new_radio(struct genl_info *info,
 	hw->wiphy->flags |= WIPHY_FLAG_SUPPORTS_TDLS |
 			    WIPHY_FLAG_HAS_REMAIN_ON_CHANNEL |
 			    WIPHY_FLAG_AP_UAPSD |
+			    WIPHY_FLAG_SUPPORTS_5_10_MHZ |
 			    WIPHY_FLAG_HAS_CHANNEL_SWITCH;
 	hw->wiphy->features |= NL80211_FEATURE_ACTIVE_MONITOR |
 			       NL80211_FEATURE_AP_MODE_CHAN_WIDTH_CHANGE |
diff --git a/drivers/nfc/st21nfca/dep.c b/drivers/nfc/st21nfca/dep.c
index a1d69f9b2d4a..0b9ca6d20ffa 100644
--- a/drivers/nfc/st21nfca/dep.c
+++ b/drivers/nfc/st21nfca/dep.c
@@ -173,8 +173,10 @@ static int st21nfca_tm_send_atr_res(struct nfc_hci_dev *hdev,
 		memcpy(atr_res->gbi, atr_req->gbi, gb_len);
 		r = nfc_set_remote_general_bytes(hdev->ndev, atr_res->gbi, gb_len);
-		if (r < 0)
+		if (r < 0) {
+			kfree_skb(skb);
 			return r;
+		}
 	}
 
 	info->dep_info.curr_nfc_dep_pni = 0;
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 3726dc780d15..cc46e250fcac 100644
---
a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -1382,16 +1382,19 @@ static void nvme_disable_admin_queue(struct nvme_dev *dev, bool shutdown) /* * Called only on a device that has been disabled and after all other threads - * that can check this device's completion queues have synced. This is the - * last chance for the driver to see a natural completion before - * nvme_cancel_request() terminates all incomplete requests. + * that can check this device's completion queues have synced, except + * nvme_poll(). This is the last chance for the driver to see a natural + * completion before nvme_cancel_request() terminates all incomplete requests. */ static void nvme_reap_pending_cqes(struct nvme_dev *dev) { int i; - for (i = dev->ctrl.queue_count - 1; i > 0; i--) + for (i = dev->ctrl.queue_count - 1; i > 0; i--) { + spin_lock(&dev->queues[i].cq_poll_lock); nvme_process_cq(&dev->queues[i]); + spin_unlock(&dev->queues[i].cq_poll_lock); + } } static int nvme_cmb_qdepth(struct nvme_dev *dev, int nr_io_queues, diff --git a/drivers/soc/mediatek/mtk-cmdq-helper.c b/drivers/soc/mediatek/mtk-cmdq-helper.c index db37144ae98c..87ee9f767b7a 100644 --- a/drivers/soc/mediatek/mtk-cmdq-helper.c +++ b/drivers/soc/mediatek/mtk-cmdq-helper.c @@ -351,7 +351,9 @@ int cmdq_pkt_flush_async(struct cmdq_pkt *pkt, cmdq_async_flush_cb cb, spin_unlock_irqrestore(&client->lock, flags); } - mbox_send_message(client->chan, pkt); + err = mbox_send_message(client->chan, pkt); + if (err < 0) + return err; /* We can send next packet immediately, so just call txdone. */ mbox_client_txdone(client->chan, 0); diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c index 13f25e241ac4..25d489bc9453 100644 --- a/fs/binfmt_elf.c +++ b/fs/binfmt_elf.c @@ -1733,7 +1733,7 @@ static int fill_thread_core_info(struct elf_thread_core_info *t, (!regset->active || regset->active(t->task, regset) > 0)) { int ret; size_t size = regset_size(t->task, regset); - void *data = kmalloc(size, GFP_KERNEL); + void *data = kzalloc(size, GFP_KERNEL); if (unlikely(!data)) return 0; ret = regset->get(t->task, regset, diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c index 5f3aa4d607de..f1acde6fb9a6 100644 --- a/fs/ceph/caps.c +++ b/fs/ceph/caps.c @@ -3991,7 +3991,7 @@ void ceph_handle_caps(struct ceph_mds_session *session, __ceph_queue_cap_release(session, cap); spin_unlock(&session->s_cap_lock); } - goto done; + goto flush_cap_releases; } /* these will work even if we don't have a cap yet */ diff --git a/fs/gfs2/lops.c b/fs/gfs2/lops.c index 48b54ec1c793..cb2a11b458c6 100644 --- a/fs/gfs2/lops.c +++ b/fs/gfs2/lops.c @@ -509,12 +509,12 @@ int gfs2_find_jhead(struct gfs2_jdesc *jd, struct gfs2_log_header_host *head, unsigned int bsize = sdp->sd_sb.sb_bsize, off; unsigned int bsize_shift = sdp->sd_sb.sb_bsize_shift; unsigned int shift = PAGE_SHIFT - bsize_shift; - unsigned int max_bio_size = 2 * 1024 * 1024; + unsigned int max_blocks = 2 * 1024 * 1024 >> bsize_shift; struct gfs2_journal_extent *je; int sz, ret = 0; struct bio *bio = NULL; struct page *page = NULL; - bool bio_chained = false, done = false; + bool done = false; errseq_t since; memset(head, 0, sizeof(*head)); @@ -537,10 +537,7 @@ int gfs2_find_jhead(struct gfs2_jdesc *jd, struct gfs2_log_header_host *head, off = 0; } - if (!bio || (bio_chained && !off) || - bio->bi_iter.bi_size >= max_bio_size) { - /* start new bio */ - } else { + if (bio && (off || block < blocks_submitted + max_blocks)) { sector_t sector = dblock << sdp->sd_fsb2bb_shift; if (bio_end_sector(bio) == sector) { @@ -553,19 +550,17 @@ 
int gfs2_find_jhead(struct gfs2_jdesc *jd, struct gfs2_log_header_host *head, (PAGE_SIZE - off) >> bsize_shift; bio = gfs2_chain_bio(bio, blocks); - bio_chained = true; goto add_block_to_new_bio; } } if (bio) { - blocks_submitted = block + 1; + blocks_submitted = block; submit_bio(bio); } bio = gfs2_log_alloc_bio(sdp, dblock, gfs2_end_log_read); bio->bi_opf = REQ_OP_READ; - bio_chained = false; add_block_to_new_bio: sz = bio_add_page(bio, page, bsize, off); BUG_ON(sz != bsize); @@ -573,7 +568,7 @@ block_added: off += bsize; if (off == PAGE_SIZE) page = NULL; - if (blocks_submitted < 2 * max_bio_size >> bsize_shift) { + if (blocks_submitted <= blocks_read + max_blocks) { /* Keep at least one bio in flight */ continue; } diff --git a/fs/notify/fanotify/fanotify.c b/fs/notify/fanotify/fanotify.c index 5435a40f82be..c18459cea6f4 100644 --- a/fs/notify/fanotify/fanotify.c +++ b/fs/notify/fanotify/fanotify.c @@ -520,7 +520,7 @@ static int fanotify_handle_event(struct fsnotify_group *group, BUILD_BUG_ON(FAN_OPEN_EXEC != FS_OPEN_EXEC); BUILD_BUG_ON(FAN_OPEN_EXEC_PERM != FS_OPEN_EXEC_PERM); - BUILD_BUG_ON(HWEIGHT32(ALL_FANOTIFY_EVENT_BITS) != 20); + BUILD_BUG_ON(HWEIGHT32(ALL_FANOTIFY_EVENT_BITS) != 19); mask = fanotify_group_event_mask(group, iter_info, mask, data, data_type); diff --git a/fs/xattr.c b/fs/xattr.c index e13265e65871..91608d9bfc6a 100644 --- a/fs/xattr.c +++ b/fs/xattr.c @@ -876,6 +876,9 @@ int simple_xattr_set(struct simple_xattrs *xattrs, const char *name, struct simple_xattr *new_xattr = NULL; int err = 0; + if (removed_size) + *removed_size = -1; + /* value == NULL means remove */ if (value) { new_xattr = simple_xattr_alloc(value, size); @@ -914,9 +917,6 @@ int simple_xattr_set(struct simple_xattrs *xattrs, const char *name, list_add(&new_xattr->list, &xattrs->head); xattr = NULL; } - - if (removed_size) - *removed_size = -1; out: spin_unlock(&xattrs->lock); if (xattr) { diff --git a/include/asm-generic/topology.h b/include/asm-generic/topology.h index 238873739550..5aa8705df87e 100644 --- a/include/asm-generic/topology.h +++ b/include/asm-generic/topology.h @@ -48,7 +48,7 @@ #ifdef CONFIG_NEED_MULTIPLE_NODES #define cpumask_of_node(node) ((node) == 0 ? 
						      cpu_online_mask : cpu_none_mask)
 #else
-  #define cpumask_of_node(node)	((void)node, cpu_online_mask)
+  #define cpumask_of_node(node)	((void)(node), cpu_online_mask)
 #endif
 #endif
 
 #ifndef pcibus_to_node
diff --git a/include/linux/device_cgroup.h b/include/linux/device_cgroup.h
index fa35b52e0002..9a72214496e5 100644
--- a/include/linux/device_cgroup.h
+++ b/include/linux/device_cgroup.h
@@ -1,6 +1,5 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 #include <linux/fs.h>
-#include <linux/bpf-cgroup.h>
 
 #define DEVCG_ACC_MKNOD 1
 #define DEVCG_ACC_READ 2
@@ -11,16 +10,10 @@
 #define DEVCG_DEV_CHAR 2
 #define DEVCG_DEV_ALL 4  /* this represents all devices */
 
-#ifdef CONFIG_CGROUP_DEVICE
-int devcgroup_check_permission(short type, u32 major, u32 minor,
-			       short access);
-#else
-static inline int devcgroup_check_permission(short type, u32 major, u32 minor,
-					     short access)
-{ return 0; }
-#endif
 
 #if defined(CONFIG_CGROUP_DEVICE) || defined(CONFIG_CGROUP_BPF)
+int devcgroup_check_permission(short type, u32 major, u32 minor,
+			       short access);
 static inline int devcgroup_inode_permission(struct inode *inode, int mask)
 {
 	short type, access = 0;
@@ -61,6 +54,9 @@ static inline int devcgroup_inode_mknod(int mode, dev_t dev)
 }
 #else
+static inline int devcgroup_check_permission(short type, u32 major, u32 minor,
+					     short access)
+{ return 0; }
 static inline int devcgroup_inode_permission(struct inode *inode, int mask)
 { return 0; }
 static inline int devcgroup_inode_mknod(int mode, dev_t dev)
diff --git a/include/linux/fanotify.h b/include/linux/fanotify.h
index 3049a6c06d9e..b79fa9bb7359 100644
--- a/include/linux/fanotify.h
+++ b/include/linux/fanotify.h
@@ -47,8 +47,7 @@
  * Directory entry modification events - reported only to directory
  * where entry is modified and not to a watching parent.
*/ -#define FANOTIFY_DIRENT_EVENTS (FAN_MOVE | FAN_CREATE | FAN_DELETE | \ - FAN_DIR_MODIFY) +#define FANOTIFY_DIRENT_EVENTS (FAN_MOVE | FAN_CREATE | FAN_DELETE) /* Events that can only be reported with data type FSNOTIFY_EVENT_INODE */ #define FANOTIFY_INODE_EVENTS (FANOTIFY_DIRENT_EVENTS | \ diff --git a/include/linux/ieee80211.h b/include/linux/ieee80211.h index a561db435a4b..fe15f831841b 100644 --- a/include/linux/ieee80211.h +++ b/include/linux/ieee80211.h @@ -105,6 +105,51 @@ /* extension, added by 802.11ad */ #define IEEE80211_STYPE_DMG_BEACON 0x0000 +#define IEEE80211_STYPE_S1G_BEACON 0x0010 + +/* bits unique to S1G beacon */ +#define IEEE80211_S1G_BCN_NEXT_TBTT 0x100 + +/* see 802.11ah-2016 9.9 NDP CMAC frames */ +#define IEEE80211_S1G_1MHZ_NDP_BITS 25 +#define IEEE80211_S1G_1MHZ_NDP_BYTES 4 +#define IEEE80211_S1G_2MHZ_NDP_BITS 37 +#define IEEE80211_S1G_2MHZ_NDP_BYTES 5 + +#define IEEE80211_NDP_FTYPE_CTS 0 +#define IEEE80211_NDP_FTYPE_CF_END 0 +#define IEEE80211_NDP_FTYPE_PS_POLL 1 +#define IEEE80211_NDP_FTYPE_ACK 2 +#define IEEE80211_NDP_FTYPE_PS_POLL_ACK 3 +#define IEEE80211_NDP_FTYPE_BA 4 +#define IEEE80211_NDP_FTYPE_BF_REPORT_POLL 5 +#define IEEE80211_NDP_FTYPE_PAGING 6 +#define IEEE80211_NDP_FTYPE_PREQ 7 + +#define SM64(f, v) ((((u64)v) << f##_S) & f) + +/* NDP CMAC frame fields */ +#define IEEE80211_NDP_FTYPE 0x0000000000000007 +#define IEEE80211_NDP_FTYPE_S 0x0000000000000000 + +/* 1M Probe Request 11ah 9.9.3.1.1 */ +#define IEEE80211_NDP_1M_PREQ_ANO 0x0000000000000008 +#define IEEE80211_NDP_1M_PREQ_ANO_S 3 +#define IEEE80211_NDP_1M_PREQ_CSSID 0x00000000000FFFF0 +#define IEEE80211_NDP_1M_PREQ_CSSID_S 4 +#define IEEE80211_NDP_1M_PREQ_RTYPE 0x0000000000100000 +#define IEEE80211_NDP_1M_PREQ_RTYPE_S 20 +#define IEEE80211_NDP_1M_PREQ_RSV 0x0000000001E00000 +#define IEEE80211_NDP_1M_PREQ_RSV 0x0000000001E00000 +/* 2M Probe Request 11ah 9.9.3.1.2 */ +#define IEEE80211_NDP_2M_PREQ_ANO 0x0000000000000008 +#define IEEE80211_NDP_2M_PREQ_ANO_S 3 +#define IEEE80211_NDP_2M_PREQ_CSSID 0x0000000FFFFFFFF0 +#define IEEE80211_NDP_2M_PREQ_CSSID_S 4 +#define IEEE80211_NDP_2M_PREQ_RTYPE 0x0000001000000000 +#define IEEE80211_NDP_2M_PREQ_RTYPE_S 36 + +#define IEEE80211_ANO_NETTYPE_WILD 15 /* control extension - for IEEE80211_FTYPE_CTL | IEEE80211_STYPE_CTL_EXT */ #define IEEE80211_CTL_EXT_POLL 0x2000 @@ -121,6 +166,21 @@ #define IEEE80211_MAX_SN IEEE80211_SN_MASK #define IEEE80211_SN_MODULO (IEEE80211_MAX_SN + 1) + +/* PV1 Layout 11ah 9.8.3.1 */ +#define IEEE80211_PV1_FCTL_VERS 0x0003 +#define IEEE80211_PV1_FCTL_FTYPE 0x001c +#define IEEE80211_PV1_FCTL_STYPE 0x00e0 +#define IEEE80211_PV1_FCTL_TODS 0x0100 +#define IEEE80211_PV1_FCTL_MOREFRAGS 0x0200 +#define IEEE80211_PV1_FCTL_PM 0x0400 +#define IEEE80211_PV1_FCTL_MOREDATA 0x0800 +#define IEEE80211_PV1_FCTL_PROTECTED 0x1000 +#define IEEE80211_PV1_FCTL_END_SP 0x2000 +#define IEEE80211_PV1_FCTL_RELAYED 0x4000 +#define IEEE80211_PV1_FCTL_ACK_POLICY 0x8000 +#define IEEE80211_PV1_FCTL_CTL_EXT 0x0f00 + static inline bool ieee80211_sn_less(u16 sn1, u16 sn2) { return ((sn1 - sn2) & IEEE80211_SN_MASK) > (IEEE80211_SN_MODULO >> 1); @@ -148,6 +208,7 @@ static inline u16 ieee80211_sn_sub(u16 sn1, u16 sn2) #define IEEE80211_MAX_FRAG_THRESHOLD 2352 #define IEEE80211_MAX_RTS_THRESHOLD 2353 #define IEEE80211_MAX_AID 2007 +#define IEEE80211_MAX_AID_S1G 8191 #define IEEE80211_MAX_TIM_LEN 251 #define IEEE80211_MAX_MESH_PEERINGS 63 /* Maximum size for the MA-UNITDATA primitive, 802.11 standard section @@ -372,6 +433,17 @@ static inline bool ieee80211_is_data(__le16 
fc) } /** + * ieee80211_is_ext - check if type is IEEE80211_FTYPE_EXT + * @fc: frame control bytes in little-endian byteorder + */ +static inline bool ieee80211_is_ext(__le16 fc) +{ + return (fc & cpu_to_le16(IEEE80211_FCTL_FTYPE)) == + cpu_to_le16(IEEE80211_FTYPE_EXT); +} + + +/** * ieee80211_is_data_qos - check if type is IEEE80211_FTYPE_DATA and IEEE80211_STYPE_QOS_DATA is set * @fc: frame control bytes in little-endian byteorder */ @@ -470,6 +542,18 @@ static inline bool ieee80211_is_beacon(__le16 fc) } /** + * ieee80211_is_s1g_beacon - check if IEEE80211_FTYPE_EXT && + * IEEE80211_STYPE_S1G_BEACON + * @fc: frame control bytes in little-endian byteorder + */ +static inline bool ieee80211_is_s1g_beacon(__le16 fc) +{ + return (fc & cpu_to_le16(IEEE80211_FCTL_FTYPE | + IEEE80211_FCTL_STYPE)) == + cpu_to_le16(IEEE80211_FTYPE_EXT | IEEE80211_STYPE_S1G_BEACON); +} + +/** * ieee80211_is_atim - check if IEEE80211_FTYPE_MGMT && IEEE80211_STYPE_ATIM * @fc: frame control bytes in little-endian byteorder */ @@ -716,7 +800,7 @@ struct ieee80211_msrment_ie { u8 token; u8 mode; u8 type; - u8 request[0]; + u8 request[]; } __packed; /** @@ -900,6 +984,59 @@ struct ieee80211_addba_ext_ie { u8 data; } __packed; +/** + * struct ieee80211_s1g_bcn_compat_ie + * + * S1G Beacon Compatibility element + */ +struct ieee80211_s1g_bcn_compat_ie { + __le16 compat_info; + __le16 beacon_int; + __le32 tsf_completion; +} __packed; + +/** + * struct ieee80211_s1g_oper_ie + * + * S1G Operation element + */ +struct ieee80211_s1g_oper_ie { + u8 ch_width; + u8 oper_class; + u8 primary_ch; + u8 oper_ch; + __le16 basic_mcs_nss; +} __packed; + +/** + * struct ieee80211_aid_response_ie + * + * AID Response element + */ +struct ieee80211_aid_response_ie { + __le16 aid; + u8 switch_count; + __le16 response_int; +} __packed; + +struct ieee80211_s1g_cap { + u8 capab_info[10]; + u8 supp_mcs_nss[5]; +} __packed; + +struct ieee80211_ext { + __le16 frame_control; + __le16 duration; + union { + struct { + u8 sa[ETH_ALEN]; + __le32 timestamp; + u8 change_seq; + u8 variable[0]; + } __packed s1g_beacon; + } u; +} __packed __aligned(2); + struct ieee80211_mgmt { __le16 frame_control; __le16 duration; @@ -1644,7 +1781,7 @@ struct ieee80211_he_operation { __le32 he_oper_params; __le16 he_mcs_nss_set; /* Optional 0,1,3,4,5,7 or 8 bytes: depends on @he_oper_params */ - u8 optional[0]; + u8 optional[]; } __packed; /** @@ -1656,7 +1793,7 @@ struct ieee80211_he_operation { struct ieee80211_he_spr { u8 he_sr_control; /* Optional 0 to 19 bytes: depends on @he_sr_control */ - u8 optional[0]; + u8 optional[]; } __packed; /** @@ -1821,6 +1958,8 @@ int ieee80211_get_vht_max_nss(struct ieee80211_vht_cap *cap, #define IEEE80211_HE_MAC_CAP3_FLEX_TWT_SCHED 0x40 #define IEEE80211_HE_MAC_CAP3_RX_CTRL_FRAME_TO_MULTIBSS 0x80 +#define IEEE80211_HE_MAC_CAP3_MAX_AMPDU_LEN_EXP_SHIFT 3 + #define IEEE80211_HE_MAC_CAP4_BSRP_BQRP_A_MPDU_AGG 0x01 #define IEEE80211_HE_MAC_CAP4_QTP 0x02 #define IEEE80211_HE_MAC_CAP4_BQR 0x04 @@ -1842,6 +1981,9 @@ int ieee80211_get_vht_max_nss(struct ieee80211_vht_cap *cap, #define IEEE80211_HE_MAC_CAP5_PUNCTURED_SOUNDING 0x40 #define IEEE80211_HE_MAC_CAP5_HT_VHT_TRIG_FRAME_RX 0x80 +#define IEEE80211_HE_VHT_MAX_AMPDU_FACTOR 20 +#define IEEE80211_HE_HT_MAX_AMPDU_FACTOR 16 + /* 802.11ax HE PHY capabilities */ #define IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_40MHZ_IN_2G 0x02 #define IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_40MHZ_80MHZ_IN_5G 0x04 @@ -2054,7 +2196,7 @@ ieee80211_he_ppe_size(u8 ppe_thres_hdr, const u8 *phy_cap_info) } /* HE 
Operation defines */ -#define IEEE80211_HE_OPERATION_DFLT_PE_DURATION_MASK 0x00000003 +#define IEEE80211_HE_OPERATION_DFLT_PE_DURATION_MASK 0x00000007 #define IEEE80211_HE_OPERATION_TWT_REQUIRED 0x00000008 #define IEEE80211_HE_OPERATION_RTS_THRESHOLD_MASK 0x00003ff0 #define IEEE80211_HE_OPERATION_RTS_THRESHOLD_OFFSET 4 @@ -2067,6 +2209,28 @@ ieee80211_he_ppe_size(u8 ppe_thres_hdr, const u8 *phy_cap_info) #define IEEE80211_HE_OPERATION_PARTIAL_BSS_COLOR 0x40000000 #define IEEE80211_HE_OPERATION_BSS_COLOR_DISABLED 0x80000000 +/** + * ieee80211_he_6ghz_oper - HE 6 GHz operation Information field + * @primary: primary channel + * @control: control flags + * @ccfs0: channel center frequency segment 0 + * @ccfs1: channel center frequency segment 1 + * @minrate: minimum rate (in 1 Mbps units) + */ +struct ieee80211_he_6ghz_oper { + u8 primary; +#define IEEE80211_HE_6GHZ_OPER_CTRL_CHANWIDTH 0x3 +#define IEEE80211_HE_6GHZ_OPER_CTRL_CHANWIDTH_20MHZ 0 +#define IEEE80211_HE_6GHZ_OPER_CTRL_CHANWIDTH_40MHZ 1 +#define IEEE80211_HE_6GHZ_OPER_CTRL_CHANWIDTH_80MHZ 2 +#define IEEE80211_HE_6GHZ_OPER_CTRL_CHANWIDTH_160MHZ 3 +#define IEEE80211_HE_6GHZ_OPER_CTRL_DUP_BEACON 0x4 + u8 control; + u8 ccfs0; + u8 ccfs1; + u8 minrate; +} __packed; + /* * ieee80211_he_oper_size - calculate 802.11ax HE Operations IE size * @he_oper_ie: byte data of the He Operations IE, stating from the byte @@ -2093,7 +2257,7 @@ ieee80211_he_oper_size(const u8 *he_oper_ie) if (he_oper_params & IEEE80211_HE_OPERATION_CO_HOSTED_BSS) oper_len++; if (he_oper_params & IEEE80211_HE_OPERATION_6GHZ_OP_INFO) - oper_len += 4; + oper_len += sizeof(struct ieee80211_he_6ghz_oper); /* Add the first byte (extension ID) to the total length */ oper_len++; @@ -2101,6 +2265,34 @@ ieee80211_he_oper_size(const u8 *he_oper_ie) return oper_len; } +/** + * ieee80211_he_6ghz_oper - obtain 6 GHz operation field + * @he_oper: HE operation element (must be pre-validated for size) + * but may be %NULL + * + * Return: a pointer to the 6 GHz operation field, or %NULL + */ +static inline const struct ieee80211_he_6ghz_oper * +ieee80211_he_6ghz_oper(const struct ieee80211_he_operation *he_oper) +{ + const u8 *ret = (void *)&he_oper->optional; + u32 he_oper_params; + + if (!he_oper) + return NULL; + + he_oper_params = le32_to_cpu(he_oper->he_oper_params); + + if (!(he_oper_params & IEEE80211_HE_OPERATION_6GHZ_OP_INFO)) + return NULL; + if (he_oper_params & IEEE80211_HE_OPERATION_VHT_OPER_INFO) + ret += 3; + if (he_oper_params & IEEE80211_HE_OPERATION_CO_HOSTED_BSS) + ret++; + + return (void *)ret; +} + /* HE Spatial Reuse defines */ #define IEEE80211_HE_SPR_NON_SRG_OFFSET_PRESENT 0x4 #define IEEE80211_HE_SPR_SRG_INFORMATION_PRESENT 0x8 @@ -2137,6 +2329,86 @@ ieee80211_he_spr_size(const u8 *he_spr_ie) return spr_len; } +/* S1G Capabilities Information field */ +#define S1G_CAPAB_B0_S1G_LONG BIT(0) +#define S1G_CAPAB_B0_SGI_1MHZ BIT(1) +#define S1G_CAPAB_B0_SGI_2MHZ BIT(2) +#define S1G_CAPAB_B0_SGI_4MHZ BIT(3) +#define S1G_CAPAB_B0_SGI_8MHZ BIT(4) +#define S1G_CAPAB_B0_SGI_16MHZ BIT(5) +#define S1G_CAPAB_B0_SUPP_CH_WIDTH_MASK (BIT(6) | BIT(7)) +#define S1G_CAPAB_B0_SUPP_CH_WIDTH_SHIFT 6 + +#define S1G_CAPAB_B1_RX_LDPC BIT(0) +#define S1G_CAPAB_B1_TX_STBC BIT(1) +#define S1G_CAPAB_B1_RX_STBC BIT(2) +#define S1G_CAPAB_B1_SU_BFER BIT(3) +#define S1G_CAPAB_B1_SU_BFEE BIT(4) +#define S1G_CAPAB_B1_BFEE_STS_MASK (BIT(5) | BIT(6) | BIT(7)) +#define S1G_CAPAB_B1_BFEE_STS_SHIFT 5 + +#define S1G_CAPAB_B2_SOUNDING_DIMENSIONS_MASK (BIT(0) | BIT(1) | BIT(2)) +#define 
S1G_CAPAB_B2_SOUNDING_DIMENSIONS_SHIFT 0 +#define S1G_CAPAB_B2_MU_BFER BIT(3) +#define S1G_CAPAB_B2_MU_BFEE BIT(4) +#define S1G_CAPAB_B2_PLUS_HTC_VHT BIT(5) +#define S1G_CAPAB_B2_TRAVELING_PILOT_MASK (BIT(6) | BIT(7)) +#define S1G_CAPAB_B2_TRAVELING_PILOT_SHIFT 6 + +#define S1G_CAPAB_B3_RD_RESPONDER BIT(0) +#define S1G_CAPAB_B3_HT_DELAYED_BA BIT(1) +#define S1G_CAPAB_B3_MAX_MPDU_LEN BIT(2) +#define S1G_CAPAB_B3_MAX_AMPDU_LEN_EXP_MASK (BIT(3) | BIT(4)) +#define S1G_CAPAB_B3_MAX_AMPDU_LEN_EXP_SHIFT 3 +#define S1G_CAPAB_B3_MIN_MPDU_START_MASK (BIT(5) | BIT(6) | BIT(7)) +#define S1G_CAPAB_B3_MIN_MPDU_START_SHIFT 5 + +#define S1G_CAPAB_B4_UPLINK_SYNC BIT(0) +#define S1G_CAPAB_B4_DYNAMIC_AID BIT(1) +#define S1G_CAPAB_B4_BAT BIT(2) +#define S1G_CAPAB_B4_TIME_ADE BIT(3) +#define S1G_CAPAB_B4_NON_TIM BIT(4) +#define S1G_CAPAB_B4_GROUP_AID BIT(5) +#define S1G_CAPAB_B4_STA_TYPE_MASK (BIT(6) | BIT(7)) +#define S1G_CAPAB_B4_STA_TYPE_SHIFT 6 + +#define S1G_CAPAB_B5_CENT_AUTH_CONTROL BIT(0) +#define S1G_CAPAB_B5_DIST_AUTH_CONTROL BIT(1) +#define S1G_CAPAB_B5_AMSDU BIT(2) +#define S1G_CAPAB_B5_AMPDU BIT(3) +#define S1G_CAPAB_B5_ASYMMETRIC_BA BIT(4) +#define S1G_CAPAB_B5_FLOW_CONTROL BIT(5) +#define S1G_CAPAB_B5_SECTORIZED_BEAM_MASK (BIT(6) | BIT(7)) +#define S1G_CAPAB_B5_SECTORIZED_BEAM_SHIFT 6 + +#define S1G_CAPAB_B6_OBSS_MITIGATION BIT(0) +#define S1G_CAPAB_B6_FRAGMENT_BA BIT(1) +#define S1G_CAPAB_B6_NDP_PS_POLL BIT(2) +#define S1G_CAPAB_B6_RAW_OPERATION BIT(3) +#define S1G_CAPAB_B6_PAGE_SLICING BIT(4) +#define S1G_CAPAB_B6_TXOP_SHARING_IMP_ACK BIT(5) +#define S1G_CAPAB_B6_VHT_LINK_ADAPT_MASK (BIT(6) | BIT(7)) +#define S1G_CAPAB_B6_VHT_LINK_ADAPT_SHIFT 6 + +#define S1G_CAPAB_B7_TACK_AS_PS_POLL BIT(0) +#define S1G_CAPAB_B7_DUP_1MHZ BIT(1) +#define S1G_CAPAB_B7_MCS_NEGOTIATION BIT(2) +#define S1G_CAPAB_B7_1MHZ_CTL_RESPONSE_PREAMBLE BIT(3) +#define S1G_CAPAB_B7_NDP_BFING_REPORT_POLL BIT(4) +#define S1G_CAPAB_B7_UNSOLICITED_DYN_AID BIT(5) +#define S1G_CAPAB_B7_SECTOR_TRAINING_OPERATION BIT(6) +#define S1G_CAPAB_B7_TEMP_PS_MODE_SWITCH BIT(7) + +#define S1G_CAPAB_B8_TWT_GROUPING BIT(0) +#define S1G_CAPAB_B8_BDT BIT(1) +#define S1G_CAPAB_B8_COLOR_MASK (BIT(2) | BIT(3) | BIT(4)) +#define S1G_CAPAB_B8_COLOR_SHIFT 2 +#define S1G_CAPAB_B8_TWT_REQUEST BIT(5) +#define S1G_CAPAB_B8_TWT_RESPOND BIT(6) +#define S1G_CAPAB_B8_PV1_FRAME BIT(7) + +#define S1G_CAPAB_B9_LINK_ADAPT_PER_CONTROL_RESPONSE BIT(0) + /* Authentication algorithms */ #define WLAN_AUTH_OPEN 0 #define WLAN_AUTH_SHARED_KEY 1 @@ -2532,8 +2804,14 @@ enum ieee80211_eid { WLAN_EID_QUIET_CHANNEL = 198, WLAN_EID_OPMODE_NOTIF = 199, + WLAN_EID_REDUCED_NEIGHBOR_REPORT = 201, + + WLAN_EID_S1G_BCN_COMPAT = 213, + WLAN_EID_S1G_SHORT_BCN_INTERVAL = 214, + WLAN_EID_S1G_CAPABILITIES = 217, WLAN_EID_VENDOR_SPECIFIC = 221, WLAN_EID_QOS_PARAMETER = 222, + WLAN_EID_S1G_OPERATION = 232, WLAN_EID_CAG_NUMBER = 237, WLAN_EID_AP_CSN = 239, WLAN_EID_FILS_INDICATION = 240, @@ -2561,9 +2839,19 @@ enum ieee80211_eid_ext { WLAN_EID_EXT_UORA = 37, WLAN_EID_EXT_HE_MU_EDCA = 38, WLAN_EID_EXT_HE_SPR = 39, + WLAN_EID_EXT_NDP_FEEDBACK_REPORT_PARAMSET = 41, + WLAN_EID_EXT_BSS_COLOR_CHG_ANN = 42, + WLAN_EID_EXT_QUIET_TIME_PERIOD_SETUP = 43, + WLAN_EID_EXT_ESS_REPORT = 45, + WLAN_EID_EXT_OPS = 46, + WLAN_EID_EXT_HE_BSS_LOAD = 47, WLAN_EID_EXT_MAX_CHANNEL_SWITCH_TIME = 52, WLAN_EID_EXT_MULTIPLE_BSSID_CONFIGURATION = 55, WLAN_EID_EXT_NON_INHERITANCE = 56, + WLAN_EID_EXT_KNOWN_BSSID = 57, + WLAN_EID_EXT_SHORT_SSID_LIST = 58, + WLAN_EID_EXT_HE_6GHZ_CAPA = 59, + WLAN_EID_EXT_UL_MU_POWER_CAPA = 
60, }; /* Action category code */ @@ -2794,7 +3082,7 @@ enum ieee80211_tdls_actioncode { #define WLAN_EXT_CAPA10_OBSS_NARROW_BW_RU_TOLERANCE_SUPPORT BIT(7) /* Defines support for enhanced multi-bssid advertisement*/ -#define WLAN_EXT_CAPA11_EMA_SUPPORT BIT(1) +#define WLAN_EXT_CAPA11_EMA_SUPPORT BIT(3) /* TDLS specific payload type in the LLC/SNAP header */ #define WLAN_TDLS_SNAP_RFTYPE 0x2 @@ -3106,6 +3394,24 @@ struct ieee80211_tspec_ie { __le16 medium_time; } __packed; +struct ieee80211_he_6ghz_capa { + /* uses IEEE80211_HE_6GHZ_CAP_* below */ + __le16 capa; +} __packed; + +/* HE 6 GHz band capabilities */ +/* uses enum ieee80211_min_mpdu_spacing values */ +#define IEEE80211_HE_6GHZ_CAP_MIN_MPDU_START 0x0007 +/* uses enum ieee80211_vht_max_ampdu_length_exp values */ +#define IEEE80211_HE_6GHZ_CAP_MAX_AMPDU_LEN_EXP 0x0038 +/* uses IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_* values */ +#define IEEE80211_HE_6GHZ_CAP_MAX_MPDU_LEN 0x00c0 +/* WLAN_HT_CAP_SM_PS_* values */ +#define IEEE80211_HE_6GHZ_CAP_SM_PS 0x0600 +#define IEEE80211_HE_6GHZ_CAP_RD_RESPONDER 0x0800 +#define IEEE80211_HE_6GHZ_CAP_RX_ANTPAT_CONS 0x1000 +#define IEEE80211_HE_6GHZ_CAP_TX_ANTPAT_CONS 0x2000 + /** * ieee80211_get_qos_ctl - get pointer to qos control bytes * @hdr: the frame @@ -3333,6 +3639,8 @@ static inline int ieee80211_get_tdls_action(struct sk_buff *skb, u32 hdr_size) /* convert frequencies */ #define MHZ_TO_KHZ(freq) ((freq) * 1000) #define KHZ_TO_MHZ(freq) ((freq) / 1000) +#define PR_KHZ(f) KHZ_TO_MHZ(f), f % 1000 +#define KHZ_F "%d.%03d" /* convert powers */ #define DBI_TO_MBI(gain) ((gain) * 100) @@ -3447,4 +3755,30 @@ static inline bool for_each_element_completed(const struct element *element, #define WLAN_RSNX_CAPA_PROTECTED_TWT BIT(4) #define WLAN_RSNX_CAPA_SAE_H2E BIT(5) +/* + * reduced neighbor report, based on Draft P802.11ax_D5.0, + * section 9.4.2.170 + */ +#define IEEE80211_AP_INFO_TBTT_HDR_TYPE 0x03 +#define IEEE80211_AP_INFO_TBTT_HDR_FILTERED 0x04 +#define IEEE80211_AP_INFO_TBTT_HDR_COLOC 0x08 +#define IEEE80211_AP_INFO_TBTT_HDR_COUNT 0xF0 +#define IEEE80211_TBTT_INFO_OFFSET_BSSID_BSS_PARAM 8 +#define IEEE80211_TBTT_INFO_OFFSET_BSSID_SSSID_BSS_PARAM 12 + +#define IEEE80211_RNR_TBTT_PARAMS_OCT_RECOMMENDED 0x01 +#define IEEE80211_RNR_TBTT_PARAMS_SAME_SSID 0x02 +#define IEEE80211_RNR_TBTT_PARAMS_MULTI_BSSID 0x04 +#define IEEE80211_RNR_TBTT_PARAMS_TRANSMITTED_BSSID 0x08 +#define IEEE80211_RNR_TBTT_PARAMS_COLOC_ESS 0x10 +#define IEEE80211_RNR_TBTT_PARAMS_PROBE_ACTIVE 0x20 +#define IEEE80211_RNR_TBTT_PARAMS_COLOC_AP 0x40 + +struct ieee80211_neighbor_ap_info { + u8 tbtt_info_hdr; + u8 tbtt_info_len; + u8 op_class; + u8 channel; +} __packed; + #endif /* LINUX_IEEE80211_H */ diff --git a/include/linux/input/lm8333.h b/include/linux/input/lm8333.h index 79f918c6e8c5..906da5fc06e0 100644 --- a/include/linux/input/lm8333.h +++ b/include/linux/input/lm8333.h @@ -1,6 +1,6 @@ /* * public include for LM8333 keypad driver - same license as driver - * Copyright (C) 2012 Wolfram Sang, Pengutronix <w.sang@pengutronix.de> + * Copyright (C) 2012 Wolfram Sang, Pengutronix <kernel@pengutronix.de> */ #ifndef _LM8333_H diff --git a/include/linux/mm.h b/include/linux/mm.h index a7b1ef8ed970..7ad23d28c87d 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -782,6 +782,11 @@ static inline void *kvcalloc(size_t n, size_t size, gfp_t flags) extern void kvfree(const void *addr); +/* + * Mapcount of compound page as a whole, does not include mapped sub-pages. 
+ * + * Must be called only for compound pages or any their tail sub-pages. + */ static inline int compound_mapcount(struct page *page) { VM_BUG_ON_PAGE(!PageCompound(page), page); @@ -801,10 +806,16 @@ static inline void page_mapcount_reset(struct page *page) int __page_mapcount(struct page *page); +/* + * Mapcount of 0-order page; when compound sub-page, includes + * compound_mapcount(). + * + * Result is undefined for pages which cannot be mapped into userspace. + * For example SLAB or special types of pages. See function page_has_type(). + * They use this place in struct page differently. + */ static inline int page_mapcount(struct page *page) { - VM_BUG_ON_PAGE(PageSlab(page), page); - if (unlikely(PageCompound(page))) return __page_mapcount(page); return atomic_read(&page->_mapcount) + 1; diff --git a/include/linux/netfilter/nf_conntrack_pptp.h b/include/linux/netfilter/nf_conntrack_pptp.h index fcc409de31a4..a28aa289afdc 100644 --- a/include/linux/netfilter/nf_conntrack_pptp.h +++ b/include/linux/netfilter/nf_conntrack_pptp.h @@ -10,7 +10,7 @@ #include <net/netfilter/nf_conntrack_expect.h> #include <uapi/linux/netfilter/nf_conntrack_tuple_common.h> -extern const char *const pptp_msg_name[]; +const char *pptp_msg_name(u_int16_t msg); /* state of the control session */ enum pptp_ctrlsess_state { diff --git a/include/linux/regmap.h b/include/linux/regmap.h index 40b07168fd8e..ddf0baff195d 100644 --- a/include/linux/regmap.h +++ b/include/linux/regmap.h @@ -1111,6 +1111,21 @@ bool regmap_reg_in_ranges(unsigned int reg, const struct regmap_range *ranges, unsigned int nranges); +static inline int regmap_set_bits(struct regmap *map, + unsigned int reg, unsigned int bits) +{ + return regmap_update_bits_base(map, reg, bits, bits, + NULL, false, false); +} + +static inline int regmap_clear_bits(struct regmap *map, + unsigned int reg, unsigned int bits) +{ + return regmap_update_bits_base(map, reg, bits, 0, NULL, false, false); +} + +int regmap_test_bits(struct regmap *map, unsigned int reg, unsigned int bits); + /** * struct reg_field - Description of an register field * @@ -1410,6 +1425,27 @@ static inline int regmap_update_bits_base(struct regmap *map, unsigned int reg, return -EINVAL; } +static inline int regmap_set_bits(struct regmap *map, + unsigned int reg, unsigned int bits) +{ + WARN_ONCE(1, "regmap API is disabled"); + return -EINVAL; +} + +static inline int regmap_clear_bits(struct regmap *map, + unsigned int reg, unsigned int bits) +{ + WARN_ONCE(1, "regmap API is disabled"); + return -EINVAL; +} + +static inline int regmap_test_bits(struct regmap *map, + unsigned int reg, unsigned int bits) +{ + WARN_ONCE(1, "regmap API is disabled"); + return -EINVAL; +} + static inline int regmap_field_update_bits_base(struct regmap_field *field, unsigned int mask, unsigned int val, bool *change, bool async, bool force) diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h index 6f6ade63b04c..e8a924eeea3d 100644 --- a/include/linux/virtio_net.h +++ b/include/linux/virtio_net.h @@ -31,6 +31,7 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb, { unsigned int gso_type = 0; unsigned int thlen = 0; + unsigned int p_off = 0; unsigned int ip_proto; if (hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE) { @@ -68,7 +69,8 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb, if (!skb_partial_csum_set(skb, start, off)) return -EINVAL; - if (skb_transport_offset(skb) + thlen > skb_headlen(skb)) + p_off = skb_transport_offset(skb) + thlen; + if (p_off > skb_headlen(skb)) return 
-EINVAL; } else { /* gso packets without NEEDS_CSUM do not set transport_offset. @@ -92,23 +94,32 @@ retry: return -EINVAL; } - if (keys.control.thoff + thlen > skb_headlen(skb) || + p_off = keys.control.thoff + thlen; + if (p_off > skb_headlen(skb) || keys.basic.ip_proto != ip_proto) return -EINVAL; skb_set_transport_header(skb, keys.control.thoff); + } else if (gso_type) { + p_off = thlen; + if (p_off > skb_headlen(skb)) + return -EINVAL; } } if (hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE) { u16 gso_size = __virtio16_to_cpu(little_endian, hdr->gso_size); + struct skb_shared_info *shinfo = skb_shinfo(skb); - skb_shinfo(skb)->gso_size = gso_size; - skb_shinfo(skb)->gso_type = gso_type; + /* Too small packets are not really GSO ones. */ + if (skb->len - p_off > gso_size) { + shinfo->gso_size = gso_size; + shinfo->gso_type = gso_type; - /* Header must be checked, and gso_segs computed. */ - skb_shinfo(skb)->gso_type |= SKB_GSO_DODGY; - skb_shinfo(skb)->gso_segs = 0; + /* Header must be checked, and gso_segs computed. */ + shinfo->gso_type |= SKB_GSO_DODGY; + shinfo->gso_segs = 0; + } } return 0; diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h index 5dcf85f186c6..cdd4f1db8670 100644 --- a/include/net/bluetooth/hci_core.h +++ b/include/net/bluetooth/hci_core.h @@ -1381,10 +1381,26 @@ static inline void hci_auth_cfm(struct hci_conn *conn, __u8 status) conn->security_cfm_cb(conn, status); } -static inline void hci_encrypt_cfm(struct hci_conn *conn, __u8 status, - __u8 encrypt) +static inline void hci_encrypt_cfm(struct hci_conn *conn, __u8 status) { struct hci_cb *cb; + __u8 encrypt; + + if (conn->state == BT_CONFIG) { + if (status) + conn->state = BT_CONNECTED; + + hci_connect_cfm(conn, status); + hci_conn_drop(conn); + return; + } + + if (!test_bit(HCI_CONN_ENCRYPT, &conn->flags)) + encrypt = 0x00; + else if (test_bit(HCI_CONN_AES_CCM, &conn->flags)) + encrypt = 0x02; + else + encrypt = 0x01; if (conn->sec_level == BT_SECURITY_SDP) conn->sec_level = BT_SECURITY_LOW; diff --git a/include/net/bluetooth/l2cap.h b/include/net/bluetooth/l2cap.h index dada14d0622c..8f1e6a7a2df8 100644 --- a/include/net/bluetooth/l2cap.h +++ b/include/net/bluetooth/l2cap.h @@ -499,7 +499,7 @@ struct l2cap_ecred_conn_req { __le16 mtu; __le16 mps; __le16 credits; - __le16 scid[0]; + __le16 scid[]; } __packed; struct l2cap_ecred_conn_rsp { @@ -507,13 +507,13 @@ struct l2cap_ecred_conn_rsp { __le16 mps; __le16 credits; __le16 result; - __le16 dcid[0]; + __le16 dcid[]; }; struct l2cap_ecred_reconf_req { __le16 mtu; __le16 mps; - __le16 scid[0]; + __le16 scid[]; } __packed; #define L2CAP_RECONF_SUCCESS 0x0000 diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h index 8b6d5c5184d1..b58ad1a3f695 100644 --- a/include/net/cfg80211.h +++ b/include/net/cfg80211.h @@ -354,10 +354,13 @@ struct ieee80211_sta_he_cap { * * @types_mask: interface types mask * @he_cap: holds the HE capabilities + * @he_6ghz_capa: HE 6 GHz capabilities, must be filled in for a + * 6 GHz band channel (and 0 may be valid value). 
*/ struct ieee80211_sband_iftype_data { u16 types_mask; struct ieee80211_sta_he_cap he_cap; + struct ieee80211_he_6ghz_capa he_6ghz_capa; }; /** @@ -510,6 +513,26 @@ ieee80211_get_he_sta_cap(const struct ieee80211_supported_band *sband) } /** + * ieee80211_get_he_6ghz_capa - return HE 6 GHz capabilities + * @sband: the sband to search for the STA on + * @iftype: the iftype to search for + * + * Return: the 6GHz capabilities + */ +static inline __le16 +ieee80211_get_he_6ghz_capa(const struct ieee80211_supported_band *sband, + enum nl80211_iftype iftype) +{ + const struct ieee80211_sband_iftype_data *data = + ieee80211_get_sband_iftype_data(sband, iftype); + + if (WARN_ON(!data || !data->he_cap.has_he)) + return 0; + + return data->he_6ghz_capa.capa; +} + +/** * wiphy_read_of_freq_limits - read frequency limits from device tree * * @wiphy: the wireless device to get extra limits for @@ -630,6 +653,19 @@ struct cfg80211_chan_def { u16 freq1_offset; }; +/* + * cfg80211_bitrate_mask - masks for bitrate control + */ +struct cfg80211_bitrate_mask { + struct { + u32 legacy; + u8 ht_mcs[IEEE80211_HT_MCS_MASK_LEN]; + u16 vht_mcs[NL80211_VHT_NSS_MAX]; + enum nl80211_txrate_gi gi; + } control[NUM_NL80211_BANDS]; +}; + + /** * struct cfg80211_tid_cfg - TID specific configuration * @config_override: Flag to notify driver to reset TID configuration @@ -640,17 +676,23 @@ struct cfg80211_chan_def { * @noack: noack configuration value for the TID * @retry_long: retry count value * @retry_short: retry count value - * @ampdu: Enable/Disable aggregation + * @ampdu: Enable/Disable MPDU aggregation * @rtscts: Enable/Disable RTS/CTS + * @amsdu: Enable/Disable MSDU aggregation + * @txrate_type: Tx bitrate mask type + * @txrate_mask: Tx bitrate to be applied for the TID */ struct cfg80211_tid_cfg { bool config_override; u8 tids; - u32 mask; + u64 mask; enum nl80211_tid_config noack; u8 retry_long, retry_short; enum nl80211_tid_config ampdu; enum nl80211_tid_config rtscts; + enum nl80211_tid_config amsdu; + enum nl80211_tx_rate_setting txrate_type; + struct cfg80211_bitrate_mask txrate_mask; }; /** @@ -1005,18 +1047,6 @@ struct cfg80211_acl_data { struct mac_address mac_addrs[]; }; -/* - * cfg80211_bitrate_mask - masks for bitrate control - */ -struct cfg80211_bitrate_mask { - struct { - u32 legacy; - u8 ht_mcs[IEEE80211_HT_MCS_MASK_LEN]; - u16 vht_mcs[NL80211_VHT_NSS_MAX]; - enum nl80211_txrate_gi gi; - } control[NUM_NL80211_BANDS]; -}; - /** * enum cfg80211_ap_settings_flags - AP settings flags * @@ -1231,6 +1261,7 @@ struct sta_txpwr { * @he_capa_len: the length of the HE capabilities * @airtime_weight: airtime scheduler weight for this station * @txpwr: transmit power for an associated station + * @he_6ghz_capa: HE 6 GHz Band capabilities of station */ struct station_parameters { const u8 *supported_rates; @@ -1263,6 +1294,7 @@ struct station_parameters { u8 he_capa_len; u16 airtime_weight; struct sta_txpwr txpwr; + const struct ieee80211_he_6ghz_capa *he_6ghz_capa; }; /** @@ -2035,7 +2067,7 @@ struct cfg80211_scan_request { bool no_cck; /* keep last */ - struct ieee80211_channel *channels[0]; + struct ieee80211_channel *channels[]; }; static inline void get_random_mask_addr(u8 *buf, const u8 *addr, const u8 *mask) @@ -2181,7 +2213,7 @@ struct cfg80211_sched_scan_request { struct list_head list; /* keep last */ - struct ieee80211_channel *channels[0]; + struct ieee80211_channel *channels[]; }; /** @@ -2303,7 +2335,7 @@ struct cfg80211_bss { u8 bssid_index; u8 max_bssid_indicator; - u8 priv[0] 
__aligned(sizeof(void *)); + u8 priv[] __aligned(sizeof(void *)); }; /** @@ -2904,12 +2936,17 @@ struct cfg80211_wowlan_wakeup { /** * struct cfg80211_gtk_rekey_data - rekey data - * @kek: key encryption key (NL80211_KEK_LEN bytes) - * @kck: key confirmation key (NL80211_KCK_LEN bytes) + * @kek: key encryption key (@kek_len bytes) + * @kck: key confirmation key (@kck_len bytes) * @replay_ctr: replay counter (NL80211_REPLAY_CTR_LEN bytes) + * @kek_len: length of kek + * @kck_len length of kck + * @akm: akm (oui, id) */ struct cfg80211_gtk_rekey_data { const u8 *kek, *kck, *replay_ctr; + u32 akm; + u8 kek_len, kck_len; }; /** @@ -4067,7 +4104,8 @@ struct cfg80211_ops { struct net_device *dev, const u8 *buf, size_t len, const u8 *dest, const __be16 proto, - const bool noencrypt); + const bool noencrypt, + u64 *cookie); int (*get_ftm_responder_stats)(struct wiphy *wiphy, struct net_device *dev, @@ -4133,9 +4171,10 @@ struct cfg80211_ops { * beaconing mode (AP, IBSS, Mesh, ...). * @WIPHY_FLAG_HAS_STATIC_WEP: The device supports static WEP key installation * before connection. + * @WIPHY_FLAG_SUPPORTS_EXT_KEK_KCK: The device supports bigger kek and kck keys */ enum wiphy_flags { - /* use hole at 0 */ + WIPHY_FLAG_SUPPORTS_EXT_KEK_KCK = BIT(0), /* use hole at 1 */ /* use hole at 2 */ WIPHY_FLAG_NETNS_OK = BIT(3), @@ -4850,7 +4889,7 @@ struct wiphy { u8 max_data_retry_count; - char priv[0] __aligned(NETDEV_ALIGN); + char priv[] __aligned(NETDEV_ALIGN); }; static inline struct net *wiphy_net(struct wiphy *wiphy) @@ -5270,6 +5309,21 @@ ieee80211_get_channel(struct wiphy *wiphy, int freq) } /** + * cfg80211_channel_is_psc - Check if the channel is a 6 GHz PSC + * @chan: control channel to check + * + * The Preferred Scanning Channels (PSC) are defined in + * Draft IEEE P802.11ax/D5.0, 26.17.2.3.3 + */ +static inline bool cfg80211_channel_is_psc(struct ieee80211_channel *chan) +{ + if (chan->band != NL80211_BAND_6GHZ) + return false; + + return ieee80211_frequency_to_channel(chan->center_freq) % 16 == 5; +} + +/** * ieee80211_get_response_rate - get basic rate for a given rate * * @sband: the band to look for rates in @@ -6987,6 +7041,26 @@ void cfg80211_conn_failed(struct net_device *dev, const u8 *mac_addr, gfp_t gfp); /** + * cfg80211_rx_mgmt_khz - notification of received, unprocessed management frame + * @wdev: wireless device receiving the frame + * @freq: Frequency on which the frame was received in KHz + * @sig_dbm: signal strength in dBm, or 0 if unknown + * @buf: Management frame (header + body) + * @len: length of the frame data + * @flags: flags, as defined in enum nl80211_rxmgmt_flags + * + * This function is called whenever an Action frame is received for a station + * mode interface, but is not processed in kernel. + * + * Return: %true if a user space application has registered for this frame. + * For action frames, that makes it responsible for rejecting unrecognized + * action frames; %false otherwise, in which case for action frames the + * driver is responsible for rejecting the frame. 
+ */ +bool cfg80211_rx_mgmt_khz(struct wireless_dev *wdev, int freq, int sig_dbm, + const u8 *buf, size_t len, u32 flags); + +/** * cfg80211_rx_mgmt - notification of received, unprocessed management frame * @wdev: wireless device receiving the frame * @freq: Frequency on which the frame was received in MHz @@ -7003,8 +7077,13 @@ void cfg80211_conn_failed(struct net_device *dev, const u8 *mac_addr, * action frames; %false otherwise, in which case for action frames the * driver is responsible for rejecting the frame. */ -bool cfg80211_rx_mgmt(struct wireless_dev *wdev, int freq, int sig_dbm, - const u8 *buf, size_t len, u32 flags); +static inline bool cfg80211_rx_mgmt(struct wireless_dev *wdev, int freq, + int sig_dbm, const u8 *buf, size_t len, + u32 flags) +{ + return cfg80211_rx_mgmt_khz(wdev, MHZ_TO_KHZ(freq), sig_dbm, buf, len, + flags); +} /** * cfg80211_mgmt_tx_status - notification of TX status for management frame @@ -7022,6 +7101,23 @@ bool cfg80211_rx_mgmt(struct wireless_dev *wdev, int freq, int sig_dbm, void cfg80211_mgmt_tx_status(struct wireless_dev *wdev, u64 cookie, const u8 *buf, size_t len, bool ack, gfp_t gfp); +/** + * cfg80211_control_port_tx_status - notification of TX status for control + * port frames + * @wdev: wireless device receiving the frame + * @cookie: Cookie returned by cfg80211_ops::tx_control_port() + * @buf: Data frame (header + body) + * @len: length of the frame data + * @ack: Whether frame was acknowledged + * @gfp: context flags + * + * This function is called whenever a control port frame was requested to be + * transmitted with cfg80211_ops::tx_control_port() to report the TX status of + * the transmission attempt. + */ +void cfg80211_control_port_tx_status(struct wireless_dev *wdev, u64 cookie, + const u8 *buf, size_t len, bool ack, + gfp_t gfp); /** * cfg80211_rx_control_port - notification about a received control port frame @@ -7203,6 +7299,21 @@ void cfg80211_probe_status(struct net_device *dev, const u8 *addr, bool is_valid_ack_signal, gfp_t gfp); /** + * cfg80211_report_obss_beacon_khz - report beacon from other APs + * @wiphy: The wiphy that received the beacon + * @frame: the frame + * @len: length of the frame + * @freq: frequency the frame was received on in KHz + * @sig_dbm: signal strength in dBm, or 0 if unknown + * + * Use this function to report to userspace when a beacon was + * received. It is not useful to call this when there is no + * netdev that is in AP/GO mode. + */ +void cfg80211_report_obss_beacon_khz(struct wiphy *wiphy, const u8 *frame, + size_t len, int freq, int sig_dbm); + +/** * cfg80211_report_obss_beacon - report beacon from other APs * @wiphy: The wiphy that received the beacon * @frame: the frame @@ -7214,9 +7325,13 @@ void cfg80211_probe_status(struct net_device *dev, const u8 *addr, * received. It is not useful to call this when there is no * netdev that is in AP/GO mode. 
*/ -void cfg80211_report_obss_beacon(struct wiphy *wiphy, - const u8 *frame, size_t len, - int freq, int sig_dbm); +static inline void cfg80211_report_obss_beacon(struct wiphy *wiphy, + const u8 *frame, size_t len, + int freq, int sig_dbm) +{ + cfg80211_report_obss_beacon_khz(wiphy, frame, len, MHZ_TO_KHZ(freq), + sig_dbm); +} /** * cfg80211_reg_can_beacon - check if beaconing is allowed diff --git a/include/net/devlink.h b/include/net/devlink.h index 8ffc1b5cd89b..1df6dfec26c2 100644 --- a/include/net/devlink.h +++ b/include/net/devlink.h @@ -645,6 +645,50 @@ enum devlink_trap_generic_id { DEVLINK_TRAP_GENERIC_ID_OVERLAY_SMAC_MC, DEVLINK_TRAP_GENERIC_ID_INGRESS_FLOW_ACTION_DROP, DEVLINK_TRAP_GENERIC_ID_EGRESS_FLOW_ACTION_DROP, + DEVLINK_TRAP_GENERIC_ID_STP, + DEVLINK_TRAP_GENERIC_ID_LACP, + DEVLINK_TRAP_GENERIC_ID_LLDP, + DEVLINK_TRAP_GENERIC_ID_IGMP_QUERY, + DEVLINK_TRAP_GENERIC_ID_IGMP_V1_REPORT, + DEVLINK_TRAP_GENERIC_ID_IGMP_V2_REPORT, + DEVLINK_TRAP_GENERIC_ID_IGMP_V3_REPORT, + DEVLINK_TRAP_GENERIC_ID_IGMP_V2_LEAVE, + DEVLINK_TRAP_GENERIC_ID_MLD_QUERY, + DEVLINK_TRAP_GENERIC_ID_MLD_V1_REPORT, + DEVLINK_TRAP_GENERIC_ID_MLD_V2_REPORT, + DEVLINK_TRAP_GENERIC_ID_MLD_V1_DONE, + DEVLINK_TRAP_GENERIC_ID_IPV4_DHCP, + DEVLINK_TRAP_GENERIC_ID_IPV6_DHCP, + DEVLINK_TRAP_GENERIC_ID_ARP_REQUEST, + DEVLINK_TRAP_GENERIC_ID_ARP_RESPONSE, + DEVLINK_TRAP_GENERIC_ID_ARP_OVERLAY, + DEVLINK_TRAP_GENERIC_ID_IPV6_NEIGH_SOLICIT, + DEVLINK_TRAP_GENERIC_ID_IPV6_NEIGH_ADVERT, + DEVLINK_TRAP_GENERIC_ID_IPV4_BFD, + DEVLINK_TRAP_GENERIC_ID_IPV6_BFD, + DEVLINK_TRAP_GENERIC_ID_IPV4_OSPF, + DEVLINK_TRAP_GENERIC_ID_IPV6_OSPF, + DEVLINK_TRAP_GENERIC_ID_IPV4_BGP, + DEVLINK_TRAP_GENERIC_ID_IPV6_BGP, + DEVLINK_TRAP_GENERIC_ID_IPV4_VRRP, + DEVLINK_TRAP_GENERIC_ID_IPV6_VRRP, + DEVLINK_TRAP_GENERIC_ID_IPV4_PIM, + DEVLINK_TRAP_GENERIC_ID_IPV6_PIM, + DEVLINK_TRAP_GENERIC_ID_UC_LB, + DEVLINK_TRAP_GENERIC_ID_LOCAL_ROUTE, + DEVLINK_TRAP_GENERIC_ID_EXTERNAL_ROUTE, + DEVLINK_TRAP_GENERIC_ID_IPV6_UC_DIP_LINK_LOCAL_SCOPE, + DEVLINK_TRAP_GENERIC_ID_IPV6_DIP_ALL_NODES, + DEVLINK_TRAP_GENERIC_ID_IPV6_DIP_ALL_ROUTERS, + DEVLINK_TRAP_GENERIC_ID_IPV6_ROUTER_SOLICIT, + DEVLINK_TRAP_GENERIC_ID_IPV6_ROUTER_ADVERT, + DEVLINK_TRAP_GENERIC_ID_IPV6_REDIRECT, + DEVLINK_TRAP_GENERIC_ID_IPV4_ROUTER_ALERT, + DEVLINK_TRAP_GENERIC_ID_IPV6_ROUTER_ALERT, + DEVLINK_TRAP_GENERIC_ID_PTP_EVENT, + DEVLINK_TRAP_GENERIC_ID_PTP_GENERAL, + DEVLINK_TRAP_GENERIC_ID_FLOW_ACTION_SAMPLE, + DEVLINK_TRAP_GENERIC_ID_FLOW_ACTION_TRAP, /* Add new generic trap IDs above */ __DEVLINK_TRAP_GENERIC_ID_MAX, @@ -657,9 +701,28 @@ enum devlink_trap_generic_id { enum devlink_trap_group_generic_id { DEVLINK_TRAP_GROUP_GENERIC_ID_L2_DROPS, DEVLINK_TRAP_GROUP_GENERIC_ID_L3_DROPS, + DEVLINK_TRAP_GROUP_GENERIC_ID_L3_EXCEPTIONS, DEVLINK_TRAP_GROUP_GENERIC_ID_BUFFER_DROPS, DEVLINK_TRAP_GROUP_GENERIC_ID_TUNNEL_DROPS, DEVLINK_TRAP_GROUP_GENERIC_ID_ACL_DROPS, + DEVLINK_TRAP_GROUP_GENERIC_ID_STP, + DEVLINK_TRAP_GROUP_GENERIC_ID_LACP, + DEVLINK_TRAP_GROUP_GENERIC_ID_LLDP, + DEVLINK_TRAP_GROUP_GENERIC_ID_MC_SNOOPING, + DEVLINK_TRAP_GROUP_GENERIC_ID_DHCP, + DEVLINK_TRAP_GROUP_GENERIC_ID_NEIGH_DISCOVERY, + DEVLINK_TRAP_GROUP_GENERIC_ID_BFD, + DEVLINK_TRAP_GROUP_GENERIC_ID_OSPF, + DEVLINK_TRAP_GROUP_GENERIC_ID_BGP, + DEVLINK_TRAP_GROUP_GENERIC_ID_VRRP, + DEVLINK_TRAP_GROUP_GENERIC_ID_PIM, + DEVLINK_TRAP_GROUP_GENERIC_ID_UC_LB, + DEVLINK_TRAP_GROUP_GENERIC_ID_LOCAL_DELIVERY, + DEVLINK_TRAP_GROUP_GENERIC_ID_IPV6, + DEVLINK_TRAP_GROUP_GENERIC_ID_PTP_EVENT, + DEVLINK_TRAP_GROUP_GENERIC_ID_PTP_GENERAL, 
+ DEVLINK_TRAP_GROUP_GENERIC_ID_ACL_SAMPLE, + DEVLINK_TRAP_GROUP_GENERIC_ID_ACL_TRAP, /* Add new generic trap group IDs above */ __DEVLINK_TRAP_GROUP_GENERIC_ID_MAX, @@ -725,17 +788,143 @@ enum devlink_trap_group_generic_id { "ingress_flow_action_drop" #define DEVLINK_TRAP_GENERIC_NAME_EGRESS_FLOW_ACTION_DROP \ "egress_flow_action_drop" +#define DEVLINK_TRAP_GENERIC_NAME_STP \ + "stp" +#define DEVLINK_TRAP_GENERIC_NAME_LACP \ + "lacp" +#define DEVLINK_TRAP_GENERIC_NAME_LLDP \ + "lldp" +#define DEVLINK_TRAP_GENERIC_NAME_IGMP_QUERY \ + "igmp_query" +#define DEVLINK_TRAP_GENERIC_NAME_IGMP_V1_REPORT \ + "igmp_v1_report" +#define DEVLINK_TRAP_GENERIC_NAME_IGMP_V2_REPORT \ + "igmp_v2_report" +#define DEVLINK_TRAP_GENERIC_NAME_IGMP_V3_REPORT \ + "igmp_v3_report" +#define DEVLINK_TRAP_GENERIC_NAME_IGMP_V2_LEAVE \ + "igmp_v2_leave" +#define DEVLINK_TRAP_GENERIC_NAME_MLD_QUERY \ + "mld_query" +#define DEVLINK_TRAP_GENERIC_NAME_MLD_V1_REPORT \ + "mld_v1_report" +#define DEVLINK_TRAP_GENERIC_NAME_MLD_V2_REPORT \ + "mld_v2_report" +#define DEVLINK_TRAP_GENERIC_NAME_MLD_V1_DONE \ + "mld_v1_done" +#define DEVLINK_TRAP_GENERIC_NAME_IPV4_DHCP \ + "ipv4_dhcp" +#define DEVLINK_TRAP_GENERIC_NAME_IPV6_DHCP \ + "ipv6_dhcp" +#define DEVLINK_TRAP_GENERIC_NAME_ARP_REQUEST \ + "arp_request" +#define DEVLINK_TRAP_GENERIC_NAME_ARP_RESPONSE \ + "arp_response" +#define DEVLINK_TRAP_GENERIC_NAME_ARP_OVERLAY \ + "arp_overlay" +#define DEVLINK_TRAP_GENERIC_NAME_IPV6_NEIGH_SOLICIT \ + "ipv6_neigh_solicit" +#define DEVLINK_TRAP_GENERIC_NAME_IPV6_NEIGH_ADVERT \ + "ipv6_neigh_advert" +#define DEVLINK_TRAP_GENERIC_NAME_IPV4_BFD \ + "ipv4_bfd" +#define DEVLINK_TRAP_GENERIC_NAME_IPV6_BFD \ + "ipv6_bfd" +#define DEVLINK_TRAP_GENERIC_NAME_IPV4_OSPF \ + "ipv4_ospf" +#define DEVLINK_TRAP_GENERIC_NAME_IPV6_OSPF \ + "ipv6_ospf" +#define DEVLINK_TRAP_GENERIC_NAME_IPV4_BGP \ + "ipv4_bgp" +#define DEVLINK_TRAP_GENERIC_NAME_IPV6_BGP \ + "ipv6_bgp" +#define DEVLINK_TRAP_GENERIC_NAME_IPV4_VRRP \ + "ipv4_vrrp" +#define DEVLINK_TRAP_GENERIC_NAME_IPV6_VRRP \ + "ipv6_vrrp" +#define DEVLINK_TRAP_GENERIC_NAME_IPV4_PIM \ + "ipv4_pim" +#define DEVLINK_TRAP_GENERIC_NAME_IPV6_PIM \ + "ipv6_pim" +#define DEVLINK_TRAP_GENERIC_NAME_UC_LB \ + "uc_loopback" +#define DEVLINK_TRAP_GENERIC_NAME_LOCAL_ROUTE \ + "local_route" +#define DEVLINK_TRAP_GENERIC_NAME_EXTERNAL_ROUTE \ + "external_route" +#define DEVLINK_TRAP_GENERIC_NAME_IPV6_UC_DIP_LINK_LOCAL_SCOPE \ + "ipv6_uc_dip_link_local_scope" +#define DEVLINK_TRAP_GENERIC_NAME_IPV6_DIP_ALL_NODES \ + "ipv6_dip_all_nodes" +#define DEVLINK_TRAP_GENERIC_NAME_IPV6_DIP_ALL_ROUTERS \ + "ipv6_dip_all_routers" +#define DEVLINK_TRAP_GENERIC_NAME_IPV6_ROUTER_SOLICIT \ + "ipv6_router_solicit" +#define DEVLINK_TRAP_GENERIC_NAME_IPV6_ROUTER_ADVERT \ + "ipv6_router_advert" +#define DEVLINK_TRAP_GENERIC_NAME_IPV6_REDIRECT \ + "ipv6_redirect" +#define DEVLINK_TRAP_GENERIC_NAME_IPV4_ROUTER_ALERT \ + "ipv4_router_alert" +#define DEVLINK_TRAP_GENERIC_NAME_IPV6_ROUTER_ALERT \ + "ipv6_router_alert" +#define DEVLINK_TRAP_GENERIC_NAME_PTP_EVENT \ + "ptp_event" +#define DEVLINK_TRAP_GENERIC_NAME_PTP_GENERAL \ + "ptp_general" +#define DEVLINK_TRAP_GENERIC_NAME_FLOW_ACTION_SAMPLE \ + "flow_action_sample" +#define DEVLINK_TRAP_GENERIC_NAME_FLOW_ACTION_TRAP \ + "flow_action_trap" #define DEVLINK_TRAP_GROUP_GENERIC_NAME_L2_DROPS \ "l2_drops" #define DEVLINK_TRAP_GROUP_GENERIC_NAME_L3_DROPS \ "l3_drops" +#define DEVLINK_TRAP_GROUP_GENERIC_NAME_L3_EXCEPTIONS \ + "l3_exceptions" #define DEVLINK_TRAP_GROUP_GENERIC_NAME_BUFFER_DROPS \ 
"buffer_drops" #define DEVLINK_TRAP_GROUP_GENERIC_NAME_TUNNEL_DROPS \ "tunnel_drops" #define DEVLINK_TRAP_GROUP_GENERIC_NAME_ACL_DROPS \ "acl_drops" +#define DEVLINK_TRAP_GROUP_GENERIC_NAME_STP \ + "stp" +#define DEVLINK_TRAP_GROUP_GENERIC_NAME_LACP \ + "lacp" +#define DEVLINK_TRAP_GROUP_GENERIC_NAME_LLDP \ + "lldp" +#define DEVLINK_TRAP_GROUP_GENERIC_NAME_MC_SNOOPING \ + "mc_snooping" +#define DEVLINK_TRAP_GROUP_GENERIC_NAME_DHCP \ + "dhcp" +#define DEVLINK_TRAP_GROUP_GENERIC_NAME_NEIGH_DISCOVERY \ + "neigh_discovery" +#define DEVLINK_TRAP_GROUP_GENERIC_NAME_BFD \ + "bfd" +#define DEVLINK_TRAP_GROUP_GENERIC_NAME_OSPF \ + "ospf" +#define DEVLINK_TRAP_GROUP_GENERIC_NAME_BGP \ + "bgp" +#define DEVLINK_TRAP_GROUP_GENERIC_NAME_VRRP \ + "vrrp" +#define DEVLINK_TRAP_GROUP_GENERIC_NAME_PIM \ + "pim" +#define DEVLINK_TRAP_GROUP_GENERIC_NAME_UC_LB \ + "uc_loopback" +#define DEVLINK_TRAP_GROUP_GENERIC_NAME_LOCAL_DELIVERY \ + "local_delivery" +#define DEVLINK_TRAP_GROUP_GENERIC_NAME_IPV6 \ + "ipv6" +#define DEVLINK_TRAP_GROUP_GENERIC_NAME_PTP_EVENT \ + "ptp_event" +#define DEVLINK_TRAP_GROUP_GENERIC_NAME_PTP_GENERAL \ + "ptp_general" +#define DEVLINK_TRAP_GROUP_GENERIC_NAME_ACL_SAMPLE \ + "acl_sample" +#define DEVLINK_TRAP_GROUP_GENERIC_NAME_ACL_TRAP \ + "acl_trap" #define DEVLINK_TRAP_GENERIC(_type, _init_action, _id, _group_id, \ _metadata_cap) \ diff --git a/include/net/espintcp.h b/include/net/espintcp.h index dd7026a00066..0335bbd76552 100644 --- a/include/net/espintcp.h +++ b/include/net/espintcp.h @@ -25,6 +25,7 @@ struct espintcp_ctx { struct espintcp_msg partial; void (*saved_data_ready)(struct sock *sk); void (*saved_write_space)(struct sock *sk); + void (*saved_destruct)(struct sock *sk); struct work_struct work; bool tx_running; }; diff --git a/include/net/flow_offload.h b/include/net/flow_offload.h index 95d633785ef9..69e13c8b6b3a 100644 --- a/include/net/flow_offload.h +++ b/include/net/flow_offload.h @@ -443,6 +443,16 @@ enum tc_setup_type; typedef int flow_setup_cb_t(enum tc_setup_type type, void *type_data, void *cb_priv); +struct flow_block_cb; + +struct flow_block_indr { + struct list_head list; + struct net_device *dev; + enum flow_block_binder_type binder_type; + void *data; + void (*cleanup)(struct flow_block_cb *block_cb); +}; + struct flow_block_cb { struct list_head driver_list; struct list_head list; @@ -450,6 +460,7 @@ struct flow_block_cb { void *cb_ident; void *cb_priv; void (*release)(void *cb_priv); + struct flow_block_indr indr; unsigned int refcnt; }; @@ -523,19 +534,18 @@ static inline void flow_block_init(struct flow_block *flow_block) typedef int flow_indr_block_bind_cb_t(struct net_device *dev, void *cb_priv, enum tc_setup_type type, void *type_data); +int flow_indr_dev_register(flow_indr_block_bind_cb_t *cb, void *cb_priv); +void flow_indr_dev_unregister(flow_indr_block_bind_cb_t *cb, void *cb_priv, + flow_setup_cb_t *setup_cb); +int flow_indr_dev_setup_offload(struct net_device *dev, + enum tc_setup_type type, void *data, + struct flow_block_offload *bo, + void (*cleanup)(struct flow_block_cb *block_cb)); + typedef void flow_indr_block_cmd_t(struct net_device *dev, flow_indr_block_bind_cb_t *cb, void *cb_priv, enum flow_block_command command); -struct flow_indr_block_entry { - flow_indr_block_cmd_t *cb; - struct list_head list; -}; - -void flow_indr_add_block_cb(struct flow_indr_block_entry *entry); - -void flow_indr_del_block_cb(struct flow_indr_block_entry *entry); - int __flow_indr_block_cb_register(struct net_device *dev, void *cb_priv, 
flow_indr_block_bind_cb_t *cb, void *cb_ident); diff --git a/include/net/ip_fib.h b/include/net/ip_fib.h index b219a8fe0950..2ec062aaa978 100644 --- a/include/net/ip_fib.h +++ b/include/net/ip_fib.h @@ -447,6 +447,16 @@ static inline int fib_num_tclassid_users(struct net *net) #endif int fib_unmerge(struct net *net); +static inline bool nhc_l3mdev_matches_dev(const struct fib_nh_common *nhc, +const struct net_device *dev) +{ + if (nhc->nhc_dev == dev || + l3mdev_master_ifindex_rcu(nhc->nhc_dev) == dev->ifindex) + return true; + + return false; +} + /* Exported by fib_semantics.c */ int ip_fib_check_default(__be32 gw, struct net_device *dev); int fib_sync_down_dev(struct net_device *dev, unsigned long event, bool force); @@ -479,6 +489,8 @@ void fib_nh_common_release(struct fib_nh_common *nhc); void fib_alias_hw_flags_set(struct net *net, const struct fib_rt_info *fri); void fib_trie_init(void); struct fib_table *fib_trie_table(u32 id, struct fib_table *alias); +bool fib_lookup_good_nhc(const struct fib_nh_common *nhc, int fib_flags, + const struct flowi4 *flp); static inline void fib_combine_itag(u32 *itag, const struct fib_result *res) { diff --git a/include/net/mac80211.h b/include/net/mac80211.h index 0d48e679efb0..11d5610d2ad5 100644 --- a/include/net/mac80211.h +++ b/include/net/mac80211.h @@ -7,7 +7,7 @@ * Copyright 2007-2010 Johannes Berg <johannes@sipsolutions.net> * Copyright 2013-2014 Intel Mobile Communications GmbH * Copyright (C) 2015 - 2017 Intel Deutschland GmbH - * Copyright (C) 2018 - 2019 Intel Corporation + * Copyright (C) 2018 - 2020 Intel Corporation */ #ifndef MAC80211_H @@ -230,7 +230,7 @@ struct ieee80211_chanctx_conf { bool radar_enabled; - u8 drv_priv[0] __aligned(sizeof(void *)); + u8 drv_priv[] __aligned(sizeof(void *)); }; /** @@ -1670,7 +1670,7 @@ struct ieee80211_vif { bool txqs_stopped[IEEE80211_NUM_ACS]; /* must be last */ - u8 drv_priv[0] __aligned(sizeof(void *)); + u8 drv_priv[] __aligned(sizeof(void *)); }; static inline bool ieee80211_vif_is_mesh(struct ieee80211_vif *vif) @@ -1798,7 +1798,7 @@ struct ieee80211_key_conf { s8 keyidx; u16 flags; u8 keylen; - u8 key[0]; + u8 key[]; }; #define IEEE80211_MAX_PN_LEN 16 @@ -1977,6 +1977,7 @@ struct ieee80211_sta_txpwr { * @ht_cap: HT capabilities of this STA; restricted to our own capabilities * @vht_cap: VHT capabilities of this STA; restricted to our own capabilities * @he_cap: HE capabilities of this STA + * @he_6ghz_capa: on 6 GHz, holds the HE 6 GHz band capabilities * @max_rx_aggregation_subframes: maximal amount of frames in a single AMPDU * that this station is allowed to transmit to us. * Can be modified by driver. 
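
A minimal userspace sketch (not kernel code) of how the 16-bit HE 6 GHz Band Capabilities field plumbed through the ieee80211.h, cfg80211.h and mac80211.h hunks above is meant to be unpacked with the new IEEE80211_HE_6GHZ_CAP_* masks. The mask values are copied from the hunk; the helper name and the sample capa value are illustrative assumptions, and in-tree code would normally use le16_to_cpu() plus FIELD_GET() instead of this open-coded shift.

/*
 * Illustrative only: unpack sub-fields of the HE 6 GHz capability word.
 */
#include <stdint.h>
#include <stdio.h>

#define IEEE80211_HE_6GHZ_CAP_MIN_MPDU_START    0x0007
#define IEEE80211_HE_6GHZ_CAP_MAX_AMPDU_LEN_EXP 0x0038
#define IEEE80211_HE_6GHZ_CAP_MAX_MPDU_LEN      0x00c0
#define IEEE80211_HE_6GHZ_CAP_SM_PS             0x0600

/* Extract a sub-field: mask out the bits, then shift down to bit 0. */
static unsigned int he6_get_field(uint16_t capa, uint16_t mask)
{
	unsigned int shift = 0;

	while (!((mask >> shift) & 1))
		shift++;
	return (capa & mask) >> shift;
}

int main(void)
{
	uint16_t capa = 0x06b5;	/* made-up capability value for illustration */

	printf("min MPDU start spacing: %u\n",
	       he6_get_field(capa, IEEE80211_HE_6GHZ_CAP_MIN_MPDU_START));
	printf("max A-MPDU length exp : %u\n",
	       he6_get_field(capa, IEEE80211_HE_6GHZ_CAP_MAX_AMPDU_LEN_EXP));
	printf("max MPDU length code  : %u\n",
	       he6_get_field(capa, IEEE80211_HE_6GHZ_CAP_MAX_MPDU_LEN));
	printf("SM power save         : %u\n",
	       he6_get_field(capa, IEEE80211_HE_6GHZ_CAP_SM_PS));
	return 0;
}
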
@@ -2016,6 +2017,7 @@ struct ieee80211_sta { struct ieee80211_sta_ht_cap ht_cap; struct ieee80211_sta_vht_cap vht_cap; struct ieee80211_sta_he_cap he_cap; + struct ieee80211_he_6ghz_capa he_6ghz_capa; u16 max_rx_aggregation_subframes; bool wme; u8 uapsd_queues; @@ -2053,7 +2055,7 @@ struct ieee80211_sta { struct ieee80211_txq *txq[IEEE80211_NUM_TIDS + 1]; /* must be last */ - u8 drv_priv[0] __aligned(sizeof(void *)); + u8 drv_priv[] __aligned(sizeof(void *)); }; /** @@ -2099,7 +2101,7 @@ struct ieee80211_txq { u8 ac; /* must be last */ - u8 drv_priv[0] __aligned(sizeof(void *)); + u8 drv_priv[] __aligned(sizeof(void *)); }; /** diff --git a/include/net/netfilter/nf_conntrack_l4proto.h b/include/net/netfilter/nf_conntrack_l4proto.h index 4cad1f0a327a..88186b95b3c2 100644 --- a/include/net/netfilter/nf_conntrack_l4proto.h +++ b/include/net/netfilter/nf_conntrack_l4proto.h @@ -42,7 +42,8 @@ struct nf_conntrack_l4proto { /* Calculate tuple nlattr size */ unsigned int (*nlattr_tuple_size)(void); int (*nlattr_to_tuple)(struct nlattr *tb[], - struct nf_conntrack_tuple *t); + struct nf_conntrack_tuple *t, + u_int32_t flags); const struct nla_policy *nla_policy; struct { @@ -152,7 +153,8 @@ const struct nf_conntrack_l4proto *nf_ct_l4proto_find(u8 l4proto); int nf_ct_port_tuple_to_nlattr(struct sk_buff *skb, const struct nf_conntrack_tuple *tuple); int nf_ct_port_nlattr_to_tuple(struct nlattr *tb[], - struct nf_conntrack_tuple *t); + struct nf_conntrack_tuple *t, + u_int32_t flags); unsigned int nf_ct_port_nlattr_tuple_size(void); extern const struct nla_policy nf_ct_port_nla_policy[]; diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h index c54a7f707e50..d7338bfd7b0f 100644 --- a/include/net/netfilter/nf_flow_table.h +++ b/include/net/netfilter/nf_flow_table.h @@ -175,6 +175,8 @@ void flow_offload_refresh(struct nf_flowtable *flow_table, struct flow_offload_tuple_rhash *flow_offload_lookup(struct nf_flowtable *flow_table, struct flow_offload_tuple *tuple); +void nf_flow_table_gc_cleanup(struct nf_flowtable *flowtable, + struct net_device *dev); void nf_flow_table_cleanup(struct net_device *dev); int nf_flow_table_init(struct nf_flowtable *flow_table); diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h index d4e29c952c40..6f0f6fca9ac3 100644 --- a/include/net/netfilter/nf_tables.h +++ b/include/net/netfilter/nf_tables.h @@ -1002,6 +1002,7 @@ struct nft_stats { struct nft_hook { struct list_head list; + bool inactive; struct nf_hook_ops ops; struct rcu_head rcu; }; @@ -1481,10 +1482,16 @@ struct nft_trans_obj { struct nft_trans_flowtable { struct nft_flowtable *flowtable; + bool update; + struct list_head hook_list; }; #define nft_trans_flowtable(trans) \ (((struct nft_trans_flowtable *)trans->data)->flowtable) +#define nft_trans_flowtable_update(trans) \ + (((struct nft_trans_flowtable *)trans->data)->update) +#define nft_trans_flowtable_hooks(trans) \ + (((struct nft_trans_flowtable *)trans->data)->hook_list) int __init nft_chain_filter_init(void); void nft_chain_filter_fini(void); diff --git a/include/net/nexthop.h b/include/net/nexthop.h index 4c951680f6f9..e4b55b43e907 100644 --- a/include/net/nexthop.h +++ b/include/net/nexthop.h @@ -73,6 +73,7 @@ struct nh_grp_entry { }; struct nh_group { + struct nh_group *spare; /* spare group for removals */ u16 num_nh; bool mpath; bool has_v4; @@ -152,21 +153,20 @@ static inline unsigned int nexthop_num_path(const struct nexthop *nh) { unsigned int rc = 1; - if 
(nexthop_is_multipath(nh)) { + if (nh->is_group) { struct nh_group *nh_grp; nh_grp = rcu_dereference_rtnl(nh->nh_grp); - rc = nh_grp->num_nh; + if (nh_grp->mpath) + rc = nh_grp->num_nh; } return rc; } static inline -struct nexthop *nexthop_mpath_select(const struct nexthop *nh, int nhsel) +struct nexthop *nexthop_mpath_select(const struct nh_group *nhg, int nhsel) { - const struct nh_group *nhg = rcu_dereference_rtnl(nh->nh_grp); - /* for_nexthops macros in fib_semantics.c grabs a pointer to * the nexthop before checking nhsel */ @@ -201,12 +201,14 @@ static inline bool nexthop_is_blackhole(const struct nexthop *nh) { const struct nh_info *nhi; - if (nexthop_is_multipath(nh)) { - if (nexthop_num_path(nh) > 1) - return false; - nh = nexthop_mpath_select(nh, 0); - if (!nh) + if (nh->is_group) { + struct nh_group *nh_grp; + + nh_grp = rcu_dereference_rtnl(nh->nh_grp); + if (nh_grp->num_nh > 1) return false; + + nh = nh_grp->nh_entries[0].nh; } nhi = rcu_dereference_rtnl(nh->nh_info); @@ -232,16 +234,79 @@ struct fib_nh_common *nexthop_fib_nhc(struct nexthop *nh, int nhsel) BUILD_BUG_ON(offsetof(struct fib_nh, nh_common) != 0); BUILD_BUG_ON(offsetof(struct fib6_nh, nh_common) != 0); - if (nexthop_is_multipath(nh)) { - nh = nexthop_mpath_select(nh, nhsel); - if (!nh) - return NULL; + if (nh->is_group) { + struct nh_group *nh_grp; + + nh_grp = rcu_dereference_rtnl(nh->nh_grp); + if (nh_grp->mpath) { + nh = nexthop_mpath_select(nh_grp, nhsel); + if (!nh) + return NULL; + } } nhi = rcu_dereference_rtnl(nh->nh_info); return &nhi->fib_nhc; } +/* called from fib_table_lookup with rcu_lock */ +static inline +struct fib_nh_common *nexthop_get_nhc_lookup(const struct nexthop *nh, + int fib_flags, + const struct flowi4 *flp, + int *nhsel) +{ + struct nh_info *nhi; + + if (nh->is_group) { + struct nh_group *nhg = rcu_dereference(nh->nh_grp); + int i; + + for (i = 0; i < nhg->num_nh; i++) { + struct nexthop *nhe = nhg->nh_entries[i].nh; + + nhi = rcu_dereference(nhe->nh_info); + if (fib_lookup_good_nhc(&nhi->fib_nhc, fib_flags, flp)) { + *nhsel = i; + return &nhi->fib_nhc; + } + } + } else { + nhi = rcu_dereference(nh->nh_info); + if (fib_lookup_good_nhc(&nhi->fib_nhc, fib_flags, flp)) { + *nhsel = 0; + return &nhi->fib_nhc; + } + } + + return NULL; +} + +static inline bool nexthop_uses_dev(const struct nexthop *nh, + const struct net_device *dev) +{ + struct nh_info *nhi; + + if (nh->is_group) { + struct nh_group *nhg = rcu_dereference(nh->nh_grp); + int i; + + for (i = 0; i < nhg->num_nh; i++) { + struct nexthop *nhe = nhg->nh_entries[i].nh; + + nhi = rcu_dereference(nhe->nh_info); + if (nhc_l3mdev_matches_dev(&nhi->fib_nhc, dev)) + return true; + } + } else { + nhi = rcu_dereference(nh->nh_info); + if (nhc_l3mdev_matches_dev(&nhi->fib_nhc, dev)) + return true; + } + + return false; +} + static inline unsigned int fib_info_num_path(const struct fib_info *fi) { if (unlikely(fi->nh)) @@ -279,8 +344,11 @@ static inline struct fib6_nh *nexthop_fib6_nh(struct nexthop *nh) { struct nh_info *nhi; - if (nexthop_is_multipath(nh)) { - nh = nexthop_mpath_select(nh, 0); + if (nh->is_group) { + struct nh_group *nh_grp; + + nh_grp = rcu_dereference_rtnl(nh->nh_grp); + nh = nexthop_mpath_select(nh_grp, 0); if (!nh) return NULL; } diff --git a/include/net/switchdev.h b/include/net/switchdev.h index db519957e134..b8c059b4e06d 100644 --- a/include/net/switchdev.h +++ b/include/net/switchdev.h @@ -116,6 +116,7 @@ struct switchdev_obj_mrp { struct net_device *p_port; struct net_device *s_port; u32 ring_id; + u16 prio; }; 
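
A simplified userspace model of the lookup pattern the nexthop.h hunk above introduces: a nexthop is either a single entry or a group, and device checks such as nexthop_uses_dev() walk every group member. The ex_-prefixed names here are stand-ins, not the kernel structures, and the RCU dereferencing and l3mdev-master matching done by nhc_l3mdev_matches_dev() are deliberately dropped for brevity.

/*
 * Illustrative only: single-vs-group nexthop device matching.
 */
#include <stdbool.h>

struct ex_nh_info {
	int ifindex;			/* device backing this nexthop */
};

struct ex_nexthop;

struct ex_nh_group {
	unsigned int num_nh;
	struct ex_nexthop *entries[8];	/* fixed size just for the sketch */
};

struct ex_nexthop {
	bool is_group;
	struct ex_nh_info *info;	/* valid when !is_group */
	struct ex_nh_group *grp;	/* valid when is_group */
};

/* Mirrors the shape of nexthop_uses_dev(): match the single entry or any
 * member of the group.
 */
static bool ex_nexthop_uses_dev(const struct ex_nexthop *nh, int ifindex)
{
	unsigned int i;

	if (!nh->is_group)
		return nh->info->ifindex == ifindex;

	for (i = 0; i < nh->grp->num_nh; i++)
		if (nh->grp->entries[i]->info->ifindex == ifindex)
			return true;

	return false;
}

int main(void)
{
	struct ex_nh_info a = { .ifindex = 2 }, b = { .ifindex = 3 };
	struct ex_nexthop na = { .info = &a }, nb = { .info = &b };
	struct ex_nh_group g = { .num_nh = 2, .entries = { &na, &nb } };
	struct ex_nexthop group = { .is_group = true, .grp = &g };

	return ex_nexthop_uses_dev(&group, 3) ? 0 : 1;	/* exits 0: dev 3 is in the group */
}
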
#define SWITCHDEV_OBJ_MRP(OBJ) \ @@ -129,6 +130,7 @@ struct switchdev_obj_ring_test_mrp { u8 max_miss; u32 ring_id; u32 period; + bool monitor; }; #define SWITCHDEV_OBJ_RING_TEST_MRP(OBJ) \ diff --git a/include/net/tls.h b/include/net/tls.h index cf9ec152fbb7..3e7b44cae0d9 100644 --- a/include/net/tls.h +++ b/include/net/tls.h @@ -135,6 +135,8 @@ struct tls_sw_context_tx { struct tls_rec *open_rec; struct list_head tx_list; atomic_t encrypt_pending; + /* protect crypto_wait with encrypt_pending */ + spinlock_t encrypt_compl_lock; int async_notify; u8 async_capable:1; @@ -155,6 +157,8 @@ struct tls_sw_context_rx { u8 async_capable:1; u8 decrypted:1; atomic_t decrypt_pending; + /* protect crypto_wait with decrypt_pending*/ + spinlock_t decrypt_compl_lock; bool async_notify; }; diff --git a/include/rdma/uverbs_std_types.h b/include/rdma/uverbs_std_types.h index 1b28ce1aba07..325fdaa3bb66 100644 --- a/include/rdma/uverbs_std_types.h +++ b/include/rdma/uverbs_std_types.h @@ -88,7 +88,7 @@ struct ib_uobject *__uobj_get_destroy(const struct uverbs_api_object *obj, static inline void uobj_put_destroy(struct ib_uobject *uobj) { - rdma_lookup_put_uobject(uobj, UVERBS_LOOKUP_WRITE); + rdma_lookup_put_uobject(uobj, UVERBS_LOOKUP_DESTROY); } static inline void uobj_put_read(struct ib_uobject *uobj) diff --git a/include/uapi/linux/devlink.h b/include/uapi/linux/devlink.h index 1ae90e06c06d..08563e6a424d 100644 --- a/include/uapi/linux/devlink.h +++ b/include/uapi/linux/devlink.h @@ -233,10 +233,13 @@ enum { * @DEVLINK_TRAP_ACTION_DROP: Packet is dropped by the device and a copy is not * sent to the CPU. * @DEVLINK_TRAP_ACTION_TRAP: The sole copy of the packet is sent to the CPU. + * @DEVLINK_TRAP_ACTION_MIRROR: Packet is forwarded by the device and a copy is + * sent to the CPU. */ enum devlink_trap_action { DEVLINK_TRAP_ACTION_DROP, DEVLINK_TRAP_ACTION_TRAP, + DEVLINK_TRAP_ACTION_MIRROR, }; /** @@ -250,10 +253,16 @@ enum devlink_trap_action { * control plane for resolution. Trapped packets * are processed by devlink and injected to * the kernel's Rx path. + * @DEVLINK_TRAP_TYPE_CONTROL: Packet was trapped because it is required for + * the correct functioning of the control plane. + * For example, an ARP request packet. Trapped + * packets are injected to the kernel's Rx path, + * but not reported to drop monitor. 
*/ enum devlink_trap_type { DEVLINK_TRAP_TYPE_DROP, DEVLINK_TRAP_TYPE_EXCEPTION, + DEVLINK_TRAP_TYPE_CONTROL, }; enum { diff --git a/include/uapi/linux/if_bridge.h b/include/uapi/linux/if_bridge.h index 5a43eb86c93b..caa6914a3e53 100644 --- a/include/uapi/linux/if_bridge.h +++ b/include/uapi/linux/if_bridge.h @@ -176,6 +176,7 @@ enum { IFLA_BRIDGE_MRP_INSTANCE_RING_ID, IFLA_BRIDGE_MRP_INSTANCE_P_IFINDEX, IFLA_BRIDGE_MRP_INSTANCE_S_IFINDEX, + IFLA_BRIDGE_MRP_INSTANCE_PRIO, __IFLA_BRIDGE_MRP_INSTANCE_MAX, }; @@ -221,6 +222,7 @@ enum { IFLA_BRIDGE_MRP_START_TEST_INTERVAL, IFLA_BRIDGE_MRP_START_TEST_MAX_MISS, IFLA_BRIDGE_MRP_START_TEST_PERIOD, + IFLA_BRIDGE_MRP_START_TEST_MONITOR, __IFLA_BRIDGE_MRP_START_TEST_MAX, }; @@ -230,6 +232,7 @@ struct br_mrp_instance { __u32 ring_id; __u32 p_ifindex; __u32 s_ifindex; + __u16 prio; }; struct br_mrp_ring_state { @@ -247,6 +250,7 @@ struct br_mrp_start_test { __u32 interval; __u32 max_miss; __u32 period; + __u32 monitor; }; struct bridge_stp_xstats { diff --git a/include/uapi/linux/mrp_bridge.h b/include/uapi/linux/mrp_bridge.h index 2600cdf5a284..84f15f48a7cb 100644 --- a/include/uapi/linux/mrp_bridge.h +++ b/include/uapi/linux/mrp_bridge.h @@ -11,11 +11,14 @@ #define MRP_DOMAIN_UUID_LENGTH 16 #define MRP_VERSION 1 #define MRP_FRAME_PRIO 7 +#define MRP_OUI_LENGTH 3 +#define MRP_MANUFACTURE_DATA_LENGTH 2 enum br_mrp_ring_role_type { BR_MRP_RING_ROLE_DISABLED, BR_MRP_RING_ROLE_MRC, BR_MRP_RING_ROLE_MRM, + BR_MRP_RING_ROLE_MRA, }; enum br_mrp_ring_state_type { @@ -43,6 +46,13 @@ enum br_mrp_tlv_header_type { BR_MRP_TLV_HEADER_RING_TOPO = 0x3, BR_MRP_TLV_HEADER_RING_LINK_DOWN = 0x4, BR_MRP_TLV_HEADER_RING_LINK_UP = 0x5, + BR_MRP_TLV_HEADER_OPTION = 0x7f, +}; + +enum br_mrp_sub_tlv_header_type { + BR_MRP_SUB_TLV_HEADER_TEST_MGR_NACK = 0x1, + BR_MRP_SUB_TLV_HEADER_TEST_PROPAGATE = 0x2, + BR_MRP_SUB_TLV_HEADER_TEST_AUTO_MGR = 0x3, }; struct br_mrp_tlv_hdr { @@ -50,35 +60,63 @@ struct br_mrp_tlv_hdr { __u8 length; }; +struct br_mrp_sub_tlv_hdr { + __u8 type; + __u8 length; +}; + struct br_mrp_end_hdr { struct br_mrp_tlv_hdr hdr; }; struct br_mrp_common_hdr { - __u16 seq_id; + __be16 seq_id; __u8 domain[MRP_DOMAIN_UUID_LENGTH]; }; struct br_mrp_ring_test_hdr { - __u16 prio; + __be16 prio; __u8 sa[ETH_ALEN]; - __u16 port_role; - __u16 state; - __u16 transitions; - __u32 timestamp; + __be16 port_role; + __be16 state; + __be16 transitions; + __be32 timestamp; }; struct br_mrp_ring_topo_hdr { - __u16 prio; + __be16 prio; __u8 sa[ETH_ALEN]; - __u16 interval; + __be16 interval; }; struct br_mrp_ring_link_hdr { __u8 sa[ETH_ALEN]; - __u16 port_role; - __u16 interval; - __u16 blocked; + __be16 port_role; + __be16 interval; + __be16 blocked; +}; + +struct br_mrp_sub_opt_hdr { + __u8 type; + __u8 manufacture_data[MRP_MANUFACTURE_DATA_LENGTH]; +}; + +struct br_mrp_test_mgr_nack_hdr { + __be16 prio; + __u8 sa[ETH_ALEN]; + __be16 other_prio; + __u8 other_sa[ETH_ALEN]; +}; + +struct br_mrp_test_prop_hdr { + __be16 prio; + __u8 sa[ETH_ALEN]; + __be16 other_prio; + __u8 other_sa[ETH_ALEN]; +}; + +struct br_mrp_oui_hdr { + __u8 oui[MRP_OUI_LENGTH]; }; #endif diff --git a/include/uapi/linux/netfilter/nfnetlink_conntrack.h b/include/uapi/linux/netfilter/nfnetlink_conntrack.h index 1d41810d17e2..262881792671 100644 --- a/include/uapi/linux/netfilter/nfnetlink_conntrack.h +++ b/include/uapi/linux/netfilter/nfnetlink_conntrack.h @@ -55,6 +55,7 @@ enum ctattr_type { CTA_LABELS, CTA_LABELS_MASK, CTA_SYNPROXY, + CTA_FILTER, __CTA_MAX }; #define CTA_MAX (__CTA_MAX - 1) @@ -276,4 +277,12 
@@ enum ctattr_expect_stats { }; #define CTA_STATS_EXP_MAX (__CTA_STATS_EXP_MAX - 1) +enum ctattr_filter { + CTA_FILTER_UNSPEC, + CTA_FILTER_ORIG_FLAGS, + CTA_FILTER_REPLY_FLAGS, + __CTA_FILTER_MAX +}; +#define CTA_FILTER_MAX (__CTA_FILTER_MAX - 1) + #endif /* _IPCONNTRACK_NETLINK_H */ diff --git a/include/uapi/linux/nl80211.h b/include/uapi/linux/nl80211.h index 9679d561f7d0..dad8c8f8581f 100644 --- a/include/uapi/linux/nl80211.h +++ b/include/uapi/linux/nl80211.h @@ -296,13 +296,14 @@ * to get a list of all present wiphys. * @NL80211_CMD_SET_WIPHY: set wiphy parameters, needs %NL80211_ATTR_WIPHY or * %NL80211_ATTR_IFINDEX; can be used to set %NL80211_ATTR_WIPHY_NAME, - * %NL80211_ATTR_WIPHY_TXQ_PARAMS, %NL80211_ATTR_WIPHY_FREQ (and the - * attributes determining the channel width; this is used for setting - * monitor mode channel), %NL80211_ATTR_WIPHY_RETRY_SHORT, - * %NL80211_ATTR_WIPHY_RETRY_LONG, %NL80211_ATTR_WIPHY_FRAG_THRESHOLD, - * and/or %NL80211_ATTR_WIPHY_RTS_THRESHOLD. - * However, for setting the channel, see %NL80211_CMD_SET_CHANNEL - * instead, the support here is for backward compatibility only. + * %NL80211_ATTR_WIPHY_TXQ_PARAMS, %NL80211_ATTR_WIPHY_FREQ, + * %NL80211_ATTR_WIPHY_FREQ_OFFSET (and the attributes determining the + * channel width; this is used for setting monitor mode channel), + * %NL80211_ATTR_WIPHY_RETRY_SHORT, %NL80211_ATTR_WIPHY_RETRY_LONG, + * %NL80211_ATTR_WIPHY_FRAG_THRESHOLD, and/or + * %NL80211_ATTR_WIPHY_RTS_THRESHOLD. However, for setting the channel, + * see %NL80211_CMD_SET_CHANNEL instead, the support here is for backward + * compatibility only. * @NL80211_CMD_NEW_WIPHY: Newly created wiphy, response to get request * or rename notification. Has attributes %NL80211_ATTR_WIPHY and * %NL80211_ATTR_WIPHY_NAME. @@ -351,7 +352,8 @@ * %NL80211_ATTR_AUTH_TYPE, %NL80211_ATTR_INACTIVITY_TIMEOUT, * %NL80211_ATTR_ACL_POLICY and %NL80211_ATTR_MAC_ADDRS. * The channel to use can be set on the interface or be given using the - * %NL80211_ATTR_WIPHY_FREQ and the attributes determining channel width. + * %NL80211_ATTR_WIPHY_FREQ and %NL80211_ATTR_WIPHY_FREQ_OFFSET, and the + * attributes determining channel width. * @NL80211_CMD_NEW_BEACON: old alias for %NL80211_CMD_START_AP * @NL80211_CMD_STOP_AP: Stop AP operation on the given interface * @NL80211_CMD_DEL_BEACON: old alias for %NL80211_CMD_STOP_AP @@ -536,11 +538,12 @@ * interface. %NL80211_ATTR_MAC is used to specify PeerSTAAddress (and * BSSID in case of station mode). %NL80211_ATTR_SSID is used to specify * the SSID (mainly for association, but is included in authentication - * request, too, to help BSS selection. %NL80211_ATTR_WIPHY_FREQ is used - * to specify the frequence of the channel in MHz. %NL80211_ATTR_AUTH_TYPE - * is used to specify the authentication type. %NL80211_ATTR_IE is used to - * define IEs (VendorSpecificInfo, but also including RSN IE and FT IEs) - * to be added to the frame. + * request, too, to help BSS selection. %NL80211_ATTR_WIPHY_FREQ + + * %NL80211_ATTR_WIPHY_FREQ_OFFSET is used to specify the frequence of the + * channel in MHz. %NL80211_ATTR_AUTH_TYPE is used to specify the + * authentication type. %NL80211_ATTR_IE is used to define IEs + * (VendorSpecificInfo, but also including RSN IE and FT IEs) to be added + * to the frame. 
* When used as an event, this reports reception of an Authentication * frame in station and IBSS modes when the local MLME processed the * frame, i.e., it was for the local STA and was received in correct @@ -595,8 +598,9 @@ * requests to connect to a specified network but without separating * auth and assoc steps. For this, you need to specify the SSID in a * %NL80211_ATTR_SSID attribute, and can optionally specify the association - * IEs in %NL80211_ATTR_IE, %NL80211_ATTR_AUTH_TYPE, %NL80211_ATTR_USE_MFP, - * %NL80211_ATTR_MAC, %NL80211_ATTR_WIPHY_FREQ, %NL80211_ATTR_CONTROL_PORT, + * IEs in %NL80211_ATTR_IE, %NL80211_ATTR_AUTH_TYPE, + * %NL80211_ATTR_USE_MFP, %NL80211_ATTR_MAC, %NL80211_ATTR_WIPHY_FREQ, + * %NL80211_ATTR_WIPHY_FREQ_OFFSET, %NL80211_ATTR_CONTROL_PORT, * %NL80211_ATTR_CONTROL_PORT_ETHERTYPE, * %NL80211_ATTR_CONTROL_PORT_NO_ENCRYPT, * %NL80211_ATTR_CONTROL_PORT_OVER_NL80211, %NL80211_ATTR_MAC_HINT, and @@ -1160,6 +1164,12 @@ * dropped because it did not include a valid MME MIC while beacon * protection was enabled (BIGTK configured in station mode). * + * @NL80211_CMD_CONTROL_PORT_FRAME_TX_STATUS: Report TX status of a control + * port frame transmitted with %NL80211_CMD_CONTROL_PORT_FRAME. + * %NL80211_ATTR_COOKIE identifies the TX command and %NL80211_ATTR_FRAME + * includes the contents of the frame. %NL80211_ATTR_ACK flag is included + * if the recipient acknowledged the frame. + * * @NL80211_CMD_MAX: highest used command number * @__NL80211_CMD_AFTER_LAST: internal use */ @@ -1388,6 +1398,8 @@ enum nl80211_commands { NL80211_CMD_UNPROT_BEACON, + NL80211_CMD_CONTROL_PORT_FRAME_TX_STATUS, + /* add new commands above here */ /* used to define NL80211_CMD_MAX below */ @@ -1433,7 +1445,8 @@ enum nl80211_commands { * of &enum nl80211_chan_width, describing the channel width. See the * documentation of the enum for more information. * @NL80211_ATTR_CENTER_FREQ1: Center frequency of the first part of the - * channel, used for anything but 20 MHz bandwidth + * channel, used for anything but 20 MHz bandwidth. In S1G this is the + * operating channel center frequency. * @NL80211_ATTR_CENTER_FREQ2: Center frequency of the second part of the * channel, used only for 80+80 MHz bandwidth * @NL80211_ATTR_WIPHY_CHANNEL_TYPE: included with NL80211_ATTR_WIPHY_FREQ @@ -2480,9 +2493,17 @@ enum nl80211_commands { * entry without having to force a disconnection after the PMK timeout. If * no roaming occurs between the reauth threshold and PMK expiration, * disassociation is still forced. - * * @NL80211_ATTR_RECEIVE_MULTICAST: multicast flag for the * %NL80211_CMD_REGISTER_FRAME command, see the description there. + * @NL80211_ATTR_WIPHY_FREQ_OFFSET: offset of the associated + * %NL80211_ATTR_WIPHY_FREQ in positive KHz. Only valid when supplied with + * an %NL80211_ATTR_WIPHY_FREQ_OFFSET. + * @NL80211_ATTR_CENTER_FREQ1_OFFSET: Center frequency offset in KHz for the + * first channel segment specified in %NL80211_ATTR_CENTER_FREQ1. + * @NL80211_ATTR_SCAN_FREQ_KHZ: nested attribute with KHz frequencies + * + * @NL80211_ATTR_HE_6GHZ_CAPABILITY: HE 6 GHz Band Capability element (from + * association request when used with NL80211_CMD_NEW_STATION). 
* * @NUM_NL80211_ATTR: total number of nl80211_attrs available * @NL80211_ATTR_MAX: highest attribute number currently defined @@ -2960,6 +2981,11 @@ enum nl80211_attrs { NL80211_ATTR_PMK_REAUTH_THRESHOLD, NL80211_ATTR_RECEIVE_MULTICAST, + NL80211_ATTR_WIPHY_FREQ_OFFSET, + NL80211_ATTR_CENTER_FREQ1_OFFSET, + NL80211_ATTR_SCAN_FREQ_KHZ, + + NL80211_ATTR_HE_6GHZ_CAPABILITY, /* add attributes here, update the policy in nl80211.c */ @@ -3539,6 +3565,8 @@ enum nl80211_mpath_info { * defined in HE capabilities IE * @NL80211_BAND_IFTYPE_ATTR_MAX: highest band HE capability attribute currently * defined + * @NL80211_BAND_IFTYPE_ATTR_HE_6GHZ_CAPA: HE 6GHz band capabilities (__le16), + * given for all 6 GHz band channels * @__NL80211_BAND_IFTYPE_ATTR_AFTER_LAST: internal use */ enum nl80211_band_iftype_attr { @@ -3549,6 +3577,7 @@ enum nl80211_band_iftype_attr { NL80211_BAND_IFTYPE_ATTR_HE_CAP_PHY, NL80211_BAND_IFTYPE_ATTR_HE_CAP_MCS_SET, NL80211_BAND_IFTYPE_ATTR_HE_CAP_PPE, + NL80211_BAND_IFTYPE_ATTR_HE_6GHZ_CAPA, /* keep last */ __NL80211_BAND_IFTYPE_ATTR_AFTER_LAST, @@ -3682,6 +3711,7 @@ enum nl80211_wmm_rule { * (see &enum nl80211_wmm_rule) * @NL80211_FREQUENCY_ATTR_NO_HE: HE operation is not allowed on this channel * in current regulatory domain. + * @NL80211_FREQUENCY_ATTR_OFFSET: frequency offset in KHz * @NL80211_FREQUENCY_ATTR_MAX: highest frequency attribute number * currently defined * @__NL80211_FREQUENCY_ATTR_AFTER_LAST: internal use @@ -3712,6 +3742,7 @@ enum nl80211_frequency_attr { NL80211_FREQUENCY_ATTR_NO_10MHZ, NL80211_FREQUENCY_ATTR_WMM, NL80211_FREQUENCY_ATTR_NO_HE, + NL80211_FREQUENCY_ATTR_OFFSET, /* keep last */ __NL80211_FREQUENCY_ATTR_AFTER_LAST, @@ -4482,6 +4513,7 @@ enum nl80211_bss_scan_width { * @NL80211_BSS_CHAIN_SIGNAL: per-chain signal strength of last BSS update. * Contains a nested array of signal strength attributes (u8, dBm), * using the nesting index as the antenna number. + * @NL80211_BSS_FREQUENCY_OFFSET: frequency offset in KHz * @__NL80211_BSS_AFTER_LAST: internal * @NL80211_BSS_MAX: highest BSS attribute */ @@ -4506,6 +4538,7 @@ enum nl80211_bss { NL80211_BSS_PARENT_TSF, NL80211_BSS_PARENT_BSSID, NL80211_BSS_CHAIN_SIGNAL, + NL80211_BSS_FREQUENCY_OFFSET, /* keep last */ __NL80211_BSS_AFTER_LAST, @@ -4816,6 +4849,17 @@ enum nl80211_tid_config { NL80211_TID_CONFIG_DISABLE, }; +/* enum nl80211_tx_rate_setting - TX rate configuration type + * @NL80211_TX_RATE_AUTOMATIC: automatically determine TX rate + * @NL80211_TX_RATE_LIMITED: limit the TX rate by the TX rate parameter + * @NL80211_TX_RATE_FIXED: fix TX rate to the TX rate parameter + */ +enum nl80211_tx_rate_setting { + NL80211_TX_RATE_AUTOMATIC, + NL80211_TX_RATE_LIMITED, + NL80211_TX_RATE_FIXED, +}; + /* enum nl80211_tid_config_attr - TID specific configuration. * @NL80211_TID_CONFIG_ATTR_PAD: pad attribute for 64-bit values * @NL80211_TID_CONFIG_ATTR_VIF_SUPP: a bitmap (u64) of attributes supported @@ -4823,12 +4867,10 @@ enum nl80211_tid_config { * (%NL80211_TID_CONFIG_ATTR_TIDS, %NL80211_TID_CONFIG_ATTR_OVERRIDE). * @NL80211_TID_CONFIG_ATTR_PEER_SUPP: same as the previous per-vif one, but * per peer instead. - * @NL80211_TID_CONFIG_ATTR_OVERRIDE: flag attribue, if no peer - * is selected, if set indicates that the new configuration overrides - * all previous peer configurations, otherwise previous peer specific - * configurations should be left untouched. 
If peer is selected then - * it will reset particular TID configuration of that peer and it will - * not accept other TID config attributes along with peer. + * @NL80211_TID_CONFIG_ATTR_OVERRIDE: flag attribue, if set indicates + * that the new configuration overrides all previous peer + * configurations, otherwise previous peer specific configurations + * should be left untouched. * @NL80211_TID_CONFIG_ATTR_TIDS: a bitmask value of TIDs (bit 0 to 7) * Its type is u16. * @NL80211_TID_CONFIG_ATTR_NOACK: Configure ack policy for the TID. @@ -4844,12 +4886,23 @@ enum nl80211_tid_config { * &NL80211_CMD_SET_TID_CONFIG. Its type is u8, min value is 1 and * the max value is advertised by the driver in this attribute on * output in wiphy capabilities. - * @NL80211_TID_CONFIG_ATTR_AMPDU_CTRL: Enable/Disable aggregation for the TIDs - * specified in %NL80211_TID_CONFIG_ATTR_TIDS. Its type is u8, using - * the values from &nl80211_tid_config. + * @NL80211_TID_CONFIG_ATTR_AMPDU_CTRL: Enable/Disable MPDU aggregation + * for the TIDs specified in %NL80211_TID_CONFIG_ATTR_TIDS. + * Its type is u8, using the values from &nl80211_tid_config. * @NL80211_TID_CONFIG_ATTR_RTSCTS_CTRL: Enable/Disable RTS_CTS for the TIDs * specified in %NL80211_TID_CONFIG_ATTR_TIDS. It is u8 type, using * the values from &nl80211_tid_config. + * @NL80211_TID_CONFIG_ATTR_AMSDU_CTRL: Enable/Disable MSDU aggregation + * for the TIDs specified in %NL80211_TID_CONFIG_ATTR_TIDS. + * Its type is u8, using the values from &nl80211_tid_config. + * @NL80211_TID_CONFIG_ATTR_TX_RATE_TYPE: This attribute will be useful + * to notfiy the driver that what type of txrate should be used + * for the TIDs specified in %NL80211_TID_CONFIG_ATTR_TIDS. using + * the values form &nl80211_tx_rate_setting. + * @NL80211_TID_CONFIG_ATTR_TX_RATE: Data frame TX rate mask should be applied + * with the parameters passed through %NL80211_ATTR_TX_RATES. + * configuration is applied to the data frame for the tid to that connected + * station. */ enum nl80211_tid_config_attr { __NL80211_TID_CONFIG_ATTR_INVALID, @@ -4863,6 +4916,9 @@ enum nl80211_tid_config_attr { NL80211_TID_CONFIG_ATTR_RETRY_LONG, NL80211_TID_CONFIG_ATTR_AMPDU_CTRL, NL80211_TID_CONFIG_ATTR_RTSCTS_CTRL, + NL80211_TID_CONFIG_ATTR_AMSDU_CTRL, + NL80211_TID_CONFIG_ATTR_TX_RATE_TYPE, + NL80211_TID_CONFIG_ATTR_TX_RATE, /* keep last */ __NL80211_TID_CONFIG_ATTR_AFTER_LAST, @@ -5340,6 +5396,8 @@ enum plink_actions { #define NL80211_KCK_LEN 16 #define NL80211_KEK_LEN 16 +#define NL80211_KCK_EXT_LEN 24 +#define NL80211_KEK_EXT_LEN 32 #define NL80211_REPLAY_CTR_LEN 8 /** @@ -5348,6 +5406,7 @@ enum plink_actions { * @NL80211_REKEY_DATA_KEK: key encryption key (binary) * @NL80211_REKEY_DATA_KCK: key confirmation key (binary) * @NL80211_REKEY_DATA_REPLAY_CTR: replay counter (binary) + * @NL80211_REKEY_DATA_AKM: AKM data (OUI, suite type) * @NUM_NL80211_REKEY_DATA: number of rekey attributes (internal) * @MAX_NL80211_REKEY_DATA: highest rekey attribute (internal) */ @@ -5356,6 +5415,7 @@ enum nl80211_rekey_data { NL80211_REKEY_DATA_KEK, NL80211_REKEY_DATA_KCK, NL80211_REKEY_DATA_REPLAY_CTR, + NL80211_REKEY_DATA_AKM, /* keep last */ NUM_NL80211_REKEY_DATA, @@ -5705,6 +5765,14 @@ enum nl80211_feature_flags { * @NL80211_EXT_FEATURE_MULTICAST_REGISTRATIONS: management frame registrations * are possible for multicast frames and those will be reported properly. * + * @NL80211_EXT_FEATURE_SCAN_FREQ_KHZ: This driver supports receiving and + * reporting scan request with %NL80211_ATTR_SCAN_FREQ_KHZ. 
In order to + * report %NL80211_ATTR_SCAN_FREQ_KHZ, %NL80211_SCAN_FLAG_FREQ_KHZ must be + * included in the scan request. + * + * @NL80211_EXT_FEATURE_CONTROL_PORT_OVER_NL80211_TX_STATUS: The driver + * can report tx status for control port over nl80211 tx operations. + * * @NUM_NL80211_EXT_FEATURES: number of extended features. * @MAX_NL80211_EXT_FEATURES: highest extended feature index. */ @@ -5758,6 +5826,8 @@ enum nl80211_ext_feature_index { NL80211_EXT_FEATURE_DEL_IBSS_STA, NL80211_EXT_FEATURE_MULTICAST_REGISTRATIONS, NL80211_EXT_FEATURE_BEACON_PROTECTION_CLIENT, + NL80211_EXT_FEATURE_SCAN_FREQ_KHZ, + NL80211_EXT_FEATURE_CONTROL_PORT_OVER_NL80211_TX_STATUS, /* add new features before the definition below */ NUM_NL80211_EXT_FEATURES, @@ -5869,6 +5939,9 @@ enum nl80211_timeout_reason { * @NL80211_SCAN_FLAG_MIN_PREQ_CONTENT: minimize probe request content to * only have supported rates and no additional capabilities (unless * added by userspace explicitly.) + * @NL80211_SCAN_FLAG_FREQ_KHZ: report scan results with + * %NL80211_ATTR_SCAN_FREQ_KHZ. This also means + * %NL80211_ATTR_SCAN_FREQUENCIES will not be included. */ enum nl80211_scan_flags { NL80211_SCAN_FLAG_LOW_PRIORITY = 1<<0, @@ -5884,6 +5957,7 @@ enum nl80211_scan_flags { NL80211_SCAN_FLAG_HIGH_ACCURACY = 1<<10, NL80211_SCAN_FLAG_RANDOM_SN = 1<<11, NL80211_SCAN_FLAG_MIN_PREQ_CONTENT = 1<<12, + NL80211_SCAN_FLAG_FREQ_KHZ = 1<<13, }; /** diff --git a/include/uapi/linux/wireless.h b/include/uapi/linux/wireless.h index a2c006a364e0..24f3371ad826 100644 --- a/include/uapi/linux/wireless.h +++ b/include/uapi/linux/wireless.h @@ -74,7 +74,11 @@ #include <linux/socket.h> /* for "struct sockaddr" et al */ #include <linux/if.h> /* for IFNAMSIZ and co... */ -#include <stddef.h> /* for offsetof */ +#ifdef __KERNEL__ +# include <linux/stddef.h> /* for offsetof */ +#else +# include <stddef.h> /* for offsetof */ +#endif /***************************** VERSION *****************************/ /* diff --git a/include/uapi/linux/xfrm.h b/include/uapi/linux/xfrm.h index 5f3b9fec7b5f..ff7cfdc6cb44 100644 --- a/include/uapi/linux/xfrm.h +++ b/include/uapi/linux/xfrm.h @@ -304,7 +304,7 @@ enum xfrm_attr_type_t { XFRMA_PROTO, /* __u8 */ XFRMA_ADDRESS_FILTER, /* struct xfrm_address_filter */ XFRMA_PAD, - XFRMA_OFFLOAD_DEV, /* struct xfrm_state_offload */ + XFRMA_OFFLOAD_DEV, /* struct xfrm_user_offload */ XFRMA_SET_MARK, /* __u32 */ XFRMA_SET_MARK_MASK, /* __u32 */ XFRMA_IF_ID, /* __u32 */ diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index d2e27dba4ac6..6d725a26f66e 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -1217,14 +1217,14 @@ static void __reg_assign_32_into_64(struct bpf_reg_state *reg) * but must be positive otherwise set to worse case bounds * and refine later from tnum. 
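 * An illustrative worked case for the "worse case bounds" fallback
 * (example numbers only): with s32_min_value = -5 and s32_max_value = 5
 * the 32-bit value -1 is in range, yet zero-extended into the 64-bit
 * register it reads as 0xffffffff (4294967295). So whenever the 32-bit
 * signed range may contain negative values, the 64-bit signed bounds
 * can only safely be widened to [0, U32_MAX]; the s32 bounds may be
 * copied over directly only when both of them are known non-negative.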
*/ - if (reg->s32_min_value > 0) - reg->smin_value = reg->s32_min_value; - else - reg->smin_value = 0; - if (reg->s32_max_value > 0) + if (reg->s32_min_value >= 0 && reg->s32_max_value >= 0) reg->smax_value = reg->s32_max_value; else reg->smax_value = U32_MAX; + if (reg->s32_min_value >= 0) + reg->smin_value = reg->s32_min_value; + else + reg->smin_value = 0; } static void __reg_combine_32_into_64(struct bpf_reg_state *reg) @@ -10522,22 +10522,13 @@ static int check_struct_ops_btf_id(struct bpf_verifier_env *env) } #define SECURITY_PREFIX "security_" -static int check_attach_modify_return(struct bpf_verifier_env *env) +static int check_attach_modify_return(struct bpf_prog *prog, unsigned long addr) { - struct bpf_prog *prog = env->prog; - unsigned long addr = (unsigned long) prog->aux->trampoline->func.addr; - - /* This is expected to be cleaned up in the future with the KRSI effort - * introducing the LSM_HOOK macro for cleaning up lsm_hooks.h. - */ if (within_error_injection_list(addr) || !strncmp(SECURITY_PREFIX, prog->aux->attach_func_name, sizeof(SECURITY_PREFIX) - 1)) return 0; - verbose(env, "fmod_ret attach_btf_id %u (%s) is not modifiable\n", - prog->aux->attach_btf_id, prog->aux->attach_func_name); - return -EINVAL; } @@ -10765,11 +10756,18 @@ static int check_attach_btf_id(struct bpf_verifier_env *env) goto out; } } + + if (prog->expected_attach_type == BPF_MODIFY_RETURN) { + ret = check_attach_modify_return(prog, addr); + if (ret) + verbose(env, "%s() is not modifiable\n", + prog->aux->attach_func_name); + } + + if (ret) + goto out; tr->func.addr = (void *)addr; prog->aux->trampoline = tr; - - if (prog->expected_attach_type == BPF_MODIFY_RETURN) - ret = check_attach_modify_return(env); out: mutex_unlock(&tr->mutex); if (ret) diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c index 6f87352f8219..41ca996568df 100644 --- a/kernel/cgroup/rstat.c +++ b/kernel/cgroup/rstat.c @@ -33,12 +33,9 @@ void cgroup_rstat_updated(struct cgroup *cgrp, int cpu) return; /* - * Paired with the one in cgroup_rstat_cpu_pop_updated(). Either we - * see NULL updated_next or they see our updated stat. - */ - smp_mb(); - - /* + * Speculative already-on-list test. This may race leading to + * temporary inaccuracies, which is fine. + * * Because @parent's updated_children is terminated with @parent * instead of NULL, we can tell whether @cgrp is on the list by * testing the next pointer for NULL. @@ -134,13 +131,6 @@ static struct cgroup *cgroup_rstat_cpu_pop_updated(struct cgroup *pos, *nextp = rstatc->updated_next; rstatc->updated_next = NULL; - /* - * Paired with the one in cgroup_rstat_cpu_updated(). - * Either they see NULL updated_next or we see their - * updated stat. - */ - smp_mb(); - return pos; } diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 43172cb05980..52c82b2c94dc 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -2907,7 +2907,7 @@ static void task_tick_numa(struct rq *rq, struct task_struct *curr) /* * We don't care about NUMA placement if we don't have memory. 
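 * Kernel threads (PF_KTHREAD) have no user address space of their own,
 * which is why testing that flag below is enough to skip them as well.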
*/ - if (!curr->mm || (curr->flags & PF_EXITING) || work->next != work) + if ((curr->flags & (PF_EXITING | PF_KTHREAD)) || work->next != work) return; /* diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 99d77ffb79c2..cd280afb246e 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -1692,6 +1692,7 @@ static void collapse_file(struct mm_struct *mm, if (page_has_private(page) && !try_to_release_page(page, GFP_KERNEL)) { result = SCAN_PAGE_HAS_PRIVATE; + putback_lru_page(page); goto out_unlock; } diff --git a/mm/z3fold.c b/mm/z3fold.c index 8c3bb5e508b8..460b0feced26 100644 --- a/mm/z3fold.c +++ b/mm/z3fold.c @@ -43,6 +43,7 @@ #include <linux/spinlock.h> #include <linux/zpool.h> #include <linux/magic.h> +#include <linux/kmemleak.h> /* * NCHUNKS_ORDER determines the internal allocation granularity, effectively @@ -215,6 +216,8 @@ static inline struct z3fold_buddy_slots *alloc_slots(struct z3fold_pool *pool, (gfp & ~(__GFP_HIGHMEM | __GFP_MOVABLE))); if (slots) { + /* It will be freed separately in free_handle(). */ + kmemleak_not_leak(slots); memset(slots->slot, 0, sizeof(slots->slot)); slots->pool = (unsigned long)pool; rwlock_init(&slots->lock); diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c index 07c34c55fc50..307800fd18e6 100644 --- a/net/bluetooth/hci_conn.c +++ b/net/bluetooth/hci_conn.c @@ -225,8 +225,6 @@ static void hci_acl_create_connection(struct hci_conn *conn) } memcpy(conn->dev_class, ie->data.dev_class, 3); - if (ie->data.ssp_mode > 0) - set_bit(HCI_CONN_SSP_ENABLED, &conn->flags); } cp.pkt_type = cpu_to_le16(conn->pkt_type); diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c index 73aabca0064b..cfeaee347db3 100644 --- a/net/bluetooth/hci_event.c +++ b/net/bluetooth/hci_event.c @@ -2931,7 +2931,7 @@ static void hci_auth_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) &cp); } else { clear_bit(HCI_CONN_ENCRYPT_PEND, &conn->flags); - hci_encrypt_cfm(conn, ev->status, 0x00); + hci_encrypt_cfm(conn, ev->status); } } @@ -3016,22 +3016,7 @@ static void read_enc_key_size_complete(struct hci_dev *hdev, u8 status, conn->enc_key_size = rp->key_size; } - if (conn->state == BT_CONFIG) { - conn->state = BT_CONNECTED; - hci_connect_cfm(conn, 0); - hci_conn_drop(conn); - } else { - u8 encrypt; - - if (!test_bit(HCI_CONN_ENCRYPT, &conn->flags)) - encrypt = 0x00; - else if (test_bit(HCI_CONN_AES_CCM, &conn->flags)) - encrypt = 0x02; - else - encrypt = 0x01; - - hci_encrypt_cfm(conn, 0, encrypt); - } + hci_encrypt_cfm(conn, 0); unlock: hci_dev_unlock(hdev); @@ -3149,14 +3134,7 @@ static void hci_encrypt_change_evt(struct hci_dev *hdev, struct sk_buff *skb) } notify: - if (conn->state == BT_CONFIG) { - if (!ev->status) - conn->state = BT_CONNECTED; - - hci_connect_cfm(conn, ev->status); - hci_conn_drop(conn); - } else - hci_encrypt_cfm(conn, ev->status, ev->encrypt); + hci_encrypt_cfm(conn, ev->status); unlock: hci_dev_unlock(hdev); @@ -4337,6 +4315,7 @@ static void hci_sync_conn_complete_evt(struct hci_dev *hdev, case 0x11: /* Unsupported Feature or Parameter Value */ case 0x1c: /* SCO interval rejected */ case 0x1a: /* Unsupported Remote Feature */ + case 0x1e: /* Invalid LMP Parameters */ case 0x1f: /* Unspecified error */ case 0x20: /* Unsupported LMP Parameter value */ if (conn->out) { diff --git a/net/bluetooth/rfcomm/sock.c b/net/bluetooth/rfcomm/sock.c index b4eaf21360ef..df14eebe80da 100644 --- a/net/bluetooth/rfcomm/sock.c +++ b/net/bluetooth/rfcomm/sock.c @@ -64,15 +64,13 @@ static void rfcomm_sk_data_ready(struct rfcomm_dlc *d, struct 
sk_buff *skb) static void rfcomm_sk_state_change(struct rfcomm_dlc *d, int err) { struct sock *sk = d->owner, *parent; - unsigned long flags; if (!sk) return; BT_DBG("dlc %p state %ld err %d", d, d->state, err); - local_irq_save(flags); - bh_lock_sock(sk); + spin_lock_bh(&sk->sk_lock.slock); if (err) sk->sk_err = err; @@ -93,8 +91,7 @@ static void rfcomm_sk_state_change(struct rfcomm_dlc *d, int err) sk->sk_state_change(sk); } - bh_unlock_sock(sk); - local_irq_restore(flags); + spin_unlock_bh(&sk->sk_lock.slock); if (parent && sock_flag(sk, SOCK_ZAPPED)) { /* We have to drop DLC lock here, otherwise diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c index 5510017cf9ff..6fd9ddb2d85c 100644 --- a/net/bluetooth/smp.c +++ b/net/bluetooth/smp.c @@ -730,6 +730,10 @@ static u8 check_enc_key_size(struct l2cap_conn *conn, __u8 max_key_size) struct hci_dev *hdev = conn->hcon->hdev; struct smp_chan *smp = chan->data; + if (conn->hcon->pending_sec_level == BT_SECURITY_FIPS && + max_key_size != SMP_MAX_ENC_KEY_SIZE) + return SMP_ENC_KEY_SIZE; + if (max_key_size > hdev->le_max_key_size || max_key_size < SMP_MIN_ENC_KEY_SIZE) return SMP_ENC_KEY_SIZE; diff --git a/net/bridge/br_arp_nd_proxy.c b/net/bridge/br_arp_nd_proxy.c index 37908561a64b..b18cdf03edb3 100644 --- a/net/bridge/br_arp_nd_proxy.c +++ b/net/bridge/br_arp_nd_proxy.c @@ -276,6 +276,10 @@ static void br_nd_send(struct net_bridge *br, struct net_bridge_port *p, ns_olen = request->len - (skb_network_offset(request) + sizeof(struct ipv6hdr)) - sizeof(*ns); for (i = 0; i < ns_olen - 1; i += (ns->opt[i + 1] << 3)) { + if (!ns->opt[i + 1]) { + kfree_skb(reply); + return; + } if (ns->opt[i] == ND_OPT_SOURCE_LL_ADDR) { daddr = ns->opt + i + sizeof(struct nd_opt_hdr); break; diff --git a/net/bridge/br_mrp.c b/net/bridge/br_mrp.c index 8ea59504ef47..24986ec7d38c 100644 --- a/net/bridge/br_mrp.c +++ b/net/bridge/br_mrp.c @@ -147,7 +147,7 @@ static struct sk_buff *br_mrp_alloc_test_skb(struct br_mrp *mrp, br_mrp_skb_tlv(skb, BR_MRP_TLV_HEADER_RING_TEST, sizeof(*hdr)); hdr = skb_put(skb, sizeof(*hdr)); - hdr->prio = cpu_to_be16(MRP_DEFAULT_PRIO); + hdr->prio = cpu_to_be16(mrp->prio); ether_addr_copy(hdr->sa, p->br->dev->dev_addr); hdr->port_role = cpu_to_be16(port_role); hdr->state = cpu_to_be16(mrp->ring_state); @@ -160,6 +160,16 @@ static struct sk_buff *br_mrp_alloc_test_skb(struct br_mrp *mrp, return skb; } +/* This function is continuously called in the following cases: + * - when node role is MRM, in this case test_monitor is always set to false + * because it needs to notify the userspace that the ring is open and needs to + * send MRP_Test frames + * - when node role is MRA, there are 2 subcases: + * - when MRA behaves as MRM, in this case is similar with MRM role + * - when MRA behaves as MRC, in this case test_monitor is set to true, + * because it needs to detect when it stops seeing MRP_Test frames + * from MRM node but it doesn't need to send MRP_Test frames. + */ static void br_mrp_test_work_expired(struct work_struct *work) { struct delayed_work *del_work = to_delayed_work(work); @@ -177,8 +187,14 @@ static void br_mrp_test_work_expired(struct work_struct *work) /* Notify that the ring is open only if the ring state is * closed, otherwise it would continue to notify at every * interval. + * Also notify that the ring is open when the node has the + * role MRA and behaves as MRC. 
The reason is that the + * userspace needs to know when the MRM stopped sending + * MRP_Test frames so that the current node to try to take + * the role of a MRM. */ - if (mrp->ring_state == BR_MRP_RING_STATE_CLOSED) + if (mrp->ring_state == BR_MRP_RING_STATE_CLOSED || + mrp->test_monitor) notify_open = true; } @@ -186,12 +202,15 @@ static void br_mrp_test_work_expired(struct work_struct *work) p = rcu_dereference(mrp->p_port); if (p) { - skb = br_mrp_alloc_test_skb(mrp, p, BR_MRP_PORT_ROLE_PRIMARY); - if (!skb) - goto out; - - skb_reset_network_header(skb); - dev_queue_xmit(skb); + if (!mrp->test_monitor) { + skb = br_mrp_alloc_test_skb(mrp, p, + BR_MRP_PORT_ROLE_PRIMARY); + if (!skb) + goto out; + + skb_reset_network_header(skb); + dev_queue_xmit(skb); + } if (notify_open && !mrp->ring_role_offloaded) br_mrp_port_open(p->dev, true); @@ -199,12 +218,15 @@ static void br_mrp_test_work_expired(struct work_struct *work) p = rcu_dereference(mrp->s_port); if (p) { - skb = br_mrp_alloc_test_skb(mrp, p, BR_MRP_PORT_ROLE_SECONDARY); - if (!skb) - goto out; - - skb_reset_network_header(skb); - dev_queue_xmit(skb); + if (!mrp->test_monitor) { + skb = br_mrp_alloc_test_skb(mrp, p, + BR_MRP_PORT_ROLE_SECONDARY); + if (!skb) + goto out; + + skb_reset_network_header(skb); + dev_queue_xmit(skb); + } if (notify_open && !mrp->ring_role_offloaded) br_mrp_port_open(p->dev, true); @@ -227,7 +249,7 @@ static void br_mrp_del_impl(struct net_bridge *br, struct br_mrp *mrp) /* Stop sending MRP_Test frames */ cancel_delayed_work_sync(&mrp->test_work); - br_mrp_switchdev_send_ring_test(br, mrp, 0, 0, 0); + br_mrp_switchdev_send_ring_test(br, mrp, 0, 0, 0, 0); br_mrp_switchdev_del(br, mrp); @@ -290,6 +312,7 @@ int br_mrp_add(struct net_bridge *br, struct br_mrp_instance *instance) return -ENOMEM; mrp->ring_id = instance->ring_id; + mrp->prio = instance->prio; p = br_mrp_get_port(br, instance->p_ifindex); spin_lock_bh(&br->lock); @@ -451,8 +474,8 @@ int br_mrp_set_ring_role(struct net_bridge *br, return 0; } -/* Start to generate MRP test frames, the frames are generated by HW and if it - * fails, they are generated by the SW. +/* Start to generate or monitor MRP test frames, the frames are generated by + * HW and if it fails, they are generated by the SW. * note: already called with rtnl_lock */ int br_mrp_start_test(struct net_bridge *br, @@ -463,16 +486,18 @@ int br_mrp_start_test(struct net_bridge *br, if (!mrp) return -EINVAL; - /* Try to push it to the HW and if it fails then continue to generate in - * SW and if that also fails then return error + /* Try to push it to the HW and if it fails then continue with SW + * implementation and if that also fails then return error. 
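 * The test->monitor flag chooses between the two modes described above:
 * when it is set the node only monitors MRP_Test frames coming from the
 * MRM (an MRA behaving as MRC) and does not transmit any of its own.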
*/ if (!br_mrp_switchdev_send_ring_test(br, mrp, test->interval, - test->max_miss, test->period)) + test->max_miss, test->period, + test->monitor)) return 0; mrp->test_interval = test->interval; mrp->test_end = jiffies + usecs_to_jiffies(test->period); mrp->test_max_miss = test->max_miss; + mrp->test_monitor = test->monitor; mrp->test_count_miss = 0; queue_delayed_work(system_wq, &mrp->test_work, usecs_to_jiffies(test->interval)); @@ -509,6 +534,57 @@ static void br_mrp_mrm_process(struct br_mrp *mrp, struct net_bridge_port *port, br_mrp_port_open(port->dev, false); } +/* Determin if the test hdr has a better priority than the node */ +static bool br_mrp_test_better_than_own(struct br_mrp *mrp, + struct net_bridge *br, + const struct br_mrp_ring_test_hdr *hdr) +{ + u16 prio = be16_to_cpu(hdr->prio); + + if (prio < mrp->prio || + (prio == mrp->prio && + ether_addr_to_u64(hdr->sa) < ether_addr_to_u64(br->dev->dev_addr))) + return true; + + return false; +} + +/* Process only MRP Test frame. All the other MRP frames are processed by + * userspace application + * note: already called with rcu_read_lock + */ +static void br_mrp_mra_process(struct br_mrp *mrp, struct net_bridge *br, + struct net_bridge_port *port, + struct sk_buff *skb) +{ + const struct br_mrp_ring_test_hdr *test_hdr; + struct br_mrp_ring_test_hdr _test_hdr; + const struct br_mrp_tlv_hdr *hdr; + struct br_mrp_tlv_hdr _hdr; + + /* Each MRP header starts with a version field which is 16 bits. + * Therefore skip the version and get directly the TLV header. + */ + hdr = skb_header_pointer(skb, sizeof(uint16_t), sizeof(_hdr), &_hdr); + if (!hdr) + return; + + if (hdr->type != BR_MRP_TLV_HEADER_RING_TEST) + return; + + test_hdr = skb_header_pointer(skb, sizeof(uint16_t) + sizeof(_hdr), + sizeof(_test_hdr), &_test_hdr); + if (!test_hdr) + return; + + /* Only frames that have a better priority than the node will + * clear the miss counter because otherwise the node will need to behave + * as MRM. + */ + if (br_mrp_test_better_than_own(mrp, br, test_hdr)) + mrp->test_count_miss = 0; +} + /* This will just forward the frame to the other mrp ring port(MRC role) or will * not do anything. 
* note: already called with rcu_read_lock @@ -545,6 +621,18 @@ static int br_mrp_rcv(struct net_bridge_port *p, return 1; } + /* If the role is MRA then don't forward the frames if it behaves as + * MRM node + */ + if (mrp->ring_role == BR_MRP_RING_ROLE_MRA) { + if (!mrp->test_monitor) { + br_mrp_mrm_process(mrp, p, skb); + return 1; + } + + br_mrp_mra_process(mrp, br, p, skb); + } + /* Clone the frame and forward it on the other MRP port */ nskb = skb_clone(skb, GFP_ATOMIC); if (!nskb) diff --git a/net/bridge/br_mrp_netlink.c b/net/bridge/br_mrp_netlink.c index d9de780d2ce0..34b3a8776991 100644 --- a/net/bridge/br_mrp_netlink.c +++ b/net/bridge/br_mrp_netlink.c @@ -22,6 +22,7 @@ br_mrp_instance_policy[IFLA_BRIDGE_MRP_INSTANCE_MAX + 1] = { [IFLA_BRIDGE_MRP_INSTANCE_RING_ID] = { .type = NLA_U32 }, [IFLA_BRIDGE_MRP_INSTANCE_P_IFINDEX] = { .type = NLA_U32 }, [IFLA_BRIDGE_MRP_INSTANCE_S_IFINDEX] = { .type = NLA_U32 }, + [IFLA_BRIDGE_MRP_INSTANCE_PRIO] = { .type = NLA_U16 }, }; static int br_mrp_instance_parse(struct net_bridge *br, struct nlattr *attr, @@ -49,6 +50,10 @@ static int br_mrp_instance_parse(struct net_bridge *br, struct nlattr *attr, inst.ring_id = nla_get_u32(tb[IFLA_BRIDGE_MRP_INSTANCE_RING_ID]); inst.p_ifindex = nla_get_u32(tb[IFLA_BRIDGE_MRP_INSTANCE_P_IFINDEX]); inst.s_ifindex = nla_get_u32(tb[IFLA_BRIDGE_MRP_INSTANCE_S_IFINDEX]); + inst.prio = MRP_DEFAULT_PRIO; + + if (tb[IFLA_BRIDGE_MRP_INSTANCE_PRIO]) + inst.prio = nla_get_u16(tb[IFLA_BRIDGE_MRP_INSTANCE_PRIO]); if (cmd == RTM_SETLINK) return br_mrp_add(br, &inst); @@ -191,6 +196,7 @@ br_mrp_start_test_policy[IFLA_BRIDGE_MRP_START_TEST_MAX + 1] = { [IFLA_BRIDGE_MRP_START_TEST_INTERVAL] = { .type = NLA_U32 }, [IFLA_BRIDGE_MRP_START_TEST_MAX_MISS] = { .type = NLA_U32 }, [IFLA_BRIDGE_MRP_START_TEST_PERIOD] = { .type = NLA_U32 }, + [IFLA_BRIDGE_MRP_START_TEST_MONITOR] = { .type = NLA_U32 }, }; static int br_mrp_start_test_parse(struct net_bridge *br, struct nlattr *attr, @@ -220,6 +226,11 @@ static int br_mrp_start_test_parse(struct net_bridge *br, struct nlattr *attr, test.interval = nla_get_u32(tb[IFLA_BRIDGE_MRP_START_TEST_INTERVAL]); test.max_miss = nla_get_u32(tb[IFLA_BRIDGE_MRP_START_TEST_MAX_MISS]); test.period = nla_get_u32(tb[IFLA_BRIDGE_MRP_START_TEST_PERIOD]); + test.monitor = false; + + if (tb[IFLA_BRIDGE_MRP_START_TEST_MONITOR]) + test.monitor = + nla_get_u32(tb[IFLA_BRIDGE_MRP_START_TEST_MONITOR]); return br_mrp_start_test(br, &test); } diff --git a/net/bridge/br_mrp_switchdev.c b/net/bridge/br_mrp_switchdev.c index 51cb1d5a24b4..0da68a0da4b5 100644 --- a/net/bridge/br_mrp_switchdev.c +++ b/net/bridge/br_mrp_switchdev.c @@ -12,6 +12,7 @@ int br_mrp_switchdev_add(struct net_bridge *br, struct br_mrp *mrp) .p_port = rtnl_dereference(mrp->p_port)->dev, .s_port = rtnl_dereference(mrp->s_port)->dev, .ring_id = mrp->ring_id, + .prio = mrp->prio, }; int err; @@ -64,7 +65,8 @@ int br_mrp_switchdev_set_ring_role(struct net_bridge *br, int br_mrp_switchdev_send_ring_test(struct net_bridge *br, struct br_mrp *mrp, u32 interval, - u8 max_miss, u32 period) + u8 max_miss, u32 period, + bool monitor) { struct switchdev_obj_ring_test_mrp test = { .obj.orig_dev = br->dev, @@ -73,6 +75,7 @@ int br_mrp_switchdev_send_ring_test(struct net_bridge *br, .max_miss = max_miss, .ring_id = mrp->ring_id, .period = period, + .monitor = monitor, }; int err; diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index ad12fe3fca8c..83490bf73a13 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -2413,7 
+2413,8 @@ void br_multicast_uninit_stats(struct net_bridge *br) free_percpu(br->mcast_stats); } -static void mcast_stats_add_dir(u64 *dst, u64 *src) +/* noinline for https://bugs.llvm.org/show_bug.cgi?id=45802#c9 */ +static noinline_for_stack void mcast_stats_add_dir(u64 *dst, u64 *src) { dst[BR_MCAST_DIR_RX] += src[BR_MCAST_DIR_RX]; dst[BR_MCAST_DIR_TX] += src[BR_MCAST_DIR_TX]; diff --git a/net/bridge/br_private_mrp.h b/net/bridge/br_private_mrp.h index a0f53cc3ab85..33b255e38ffe 100644 --- a/net/bridge/br_private_mrp.h +++ b/net/bridge/br_private_mrp.h @@ -14,6 +14,7 @@ struct br_mrp { struct net_bridge_port __rcu *s_port; u32 ring_id; + u16 prio; enum br_mrp_ring_role_type ring_role; u8 ring_role_offloaded; @@ -25,6 +26,7 @@ struct br_mrp { unsigned long test_end; u32 test_count_miss; u32 test_max_miss; + bool test_monitor; u32 seq_id; @@ -51,7 +53,8 @@ int br_mrp_switchdev_set_ring_role(struct net_bridge *br, struct br_mrp *mrp, int br_mrp_switchdev_set_ring_state(struct net_bridge *br, struct br_mrp *mrp, enum br_mrp_ring_state_type state); int br_mrp_switchdev_send_ring_test(struct net_bridge *br, struct br_mrp *mrp, - u32 interval, u8 max_miss, u32 period); + u32 interval, u8 max_miss, u32 period, + bool monitor); int br_mrp_port_switchdev_set_state(struct net_bridge_port *p, enum br_mrp_port_state_type state); int br_mrp_port_switchdev_set_role(struct net_bridge_port *p, diff --git a/net/bridge/netfilter/nft_reject_bridge.c b/net/bridge/netfilter/nft_reject_bridge.c index b325b569e761..f48cf4cfb80f 100644 --- a/net/bridge/netfilter/nft_reject_bridge.c +++ b/net/bridge/netfilter/nft_reject_bridge.c @@ -31,6 +31,12 @@ static void nft_reject_br_push_etherhdr(struct sk_buff *oldskb, ether_addr_copy(eth->h_dest, eth_hdr(oldskb)->h_source); eth->h_proto = eth_hdr(oldskb)->h_proto; skb_pull(nskb, ETH_HLEN); + + if (skb_vlan_tag_present(oldskb)) { + u16 vid = skb_vlan_tag_get(oldskb); + + __vlan_hwaccel_put_tag(nskb, oldskb->vlan_proto, vid); + } } static int nft_bridge_iphdr_validate(struct sk_buff *skb) diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c index 998e26b75a78..1d4973f8cd7a 100644 --- a/net/ceph/osd_client.c +++ b/net/ceph/osd_client.c @@ -3649,7 +3649,9 @@ static void handle_reply(struct ceph_osd *osd, struct ceph_msg *msg) * supported. 
*/ req->r_t.target_oloc.pool = m.redirect.oloc.pool; - req->r_flags |= CEPH_OSD_FLAG_REDIRECTED; + req->r_flags |= CEPH_OSD_FLAG_REDIRECTED | + CEPH_OSD_FLAG_IGNORE_OVERLAY | + CEPH_OSD_FLAG_IGNORE_CACHE; req->r_tid = 0; __submit_request(req, false); goto out_unlock_osdc; diff --git a/net/compat.c b/net/compat.c index afd7b444e0bf..5e3041a2c37d 100644 --- a/net/compat.c +++ b/net/compat.c @@ -183,20 +183,21 @@ int cmsghdr_from_user_compat_to_kern(struct msghdr *kmsg, struct sock *sk, memset(kcmsg, 0, kcmlen); ucmsg = CMSG_COMPAT_FIRSTHDR(kmsg); while (ucmsg != NULL) { - if (__get_user(ucmlen, &ucmsg->cmsg_len)) + struct compat_cmsghdr cmsg; + if (copy_from_user(&cmsg, ucmsg, sizeof(cmsg))) goto Efault; - if (!CMSG_COMPAT_OK(ucmlen, ucmsg, kmsg)) + if (!CMSG_COMPAT_OK(cmsg.cmsg_len, ucmsg, kmsg)) goto Einval; - tmp = ((ucmlen - sizeof(*ucmsg)) + sizeof(struct cmsghdr)); + tmp = ((cmsg.cmsg_len - sizeof(*ucmsg)) + sizeof(struct cmsghdr)); if ((char *)kcmsg_base + kcmlen - (char *)kcmsg < CMSG_ALIGN(tmp)) goto Einval; kcmsg->cmsg_len = tmp; + kcmsg->cmsg_level = cmsg.cmsg_level; + kcmsg->cmsg_type = cmsg.cmsg_type; tmp = CMSG_ALIGN(tmp); - if (__get_user(kcmsg->cmsg_level, &ucmsg->cmsg_level) || - __get_user(kcmsg->cmsg_type, &ucmsg->cmsg_type) || - copy_from_user(CMSG_DATA(kcmsg), + if (copy_from_user(CMSG_DATA(kcmsg), CMSG_COMPAT_DATA(ucmsg), - (ucmlen - sizeof(*ucmsg)))) + (cmsg.cmsg_len - sizeof(*ucmsg)))) goto Efault; /* Advance. */ diff --git a/net/core/devlink.c b/net/core/devlink.c index 7b76e5fffc10..2cafbc808b09 100644 --- a/net/core/devlink.c +++ b/net/core/devlink.c @@ -5869,7 +5869,8 @@ devlink_trap_action_get_from_info(struct genl_info *info, val = nla_get_u8(info->attrs[DEVLINK_ATTR_TRAP_ACTION]); switch (val) { case DEVLINK_TRAP_ACTION_DROP: /* fall-through */ - case DEVLINK_TRAP_ACTION_TRAP: + case DEVLINK_TRAP_ACTION_TRAP: /* fall-through */ + case DEVLINK_TRAP_ACTION_MIRROR: *p_trap_action = val; break; default: @@ -8494,6 +8495,50 @@ static const struct devlink_trap devlink_trap_generic[] = { DEVLINK_TRAP(OVERLAY_SMAC_MC, DROP), DEVLINK_TRAP(INGRESS_FLOW_ACTION_DROP, DROP), DEVLINK_TRAP(EGRESS_FLOW_ACTION_DROP, DROP), + DEVLINK_TRAP(STP, CONTROL), + DEVLINK_TRAP(LACP, CONTROL), + DEVLINK_TRAP(LLDP, CONTROL), + DEVLINK_TRAP(IGMP_QUERY, CONTROL), + DEVLINK_TRAP(IGMP_V1_REPORT, CONTROL), + DEVLINK_TRAP(IGMP_V2_REPORT, CONTROL), + DEVLINK_TRAP(IGMP_V3_REPORT, CONTROL), + DEVLINK_TRAP(IGMP_V2_LEAVE, CONTROL), + DEVLINK_TRAP(MLD_QUERY, CONTROL), + DEVLINK_TRAP(MLD_V1_REPORT, CONTROL), + DEVLINK_TRAP(MLD_V2_REPORT, CONTROL), + DEVLINK_TRAP(MLD_V1_DONE, CONTROL), + DEVLINK_TRAP(IPV4_DHCP, CONTROL), + DEVLINK_TRAP(IPV6_DHCP, CONTROL), + DEVLINK_TRAP(ARP_REQUEST, CONTROL), + DEVLINK_TRAP(ARP_RESPONSE, CONTROL), + DEVLINK_TRAP(ARP_OVERLAY, CONTROL), + DEVLINK_TRAP(IPV6_NEIGH_SOLICIT, CONTROL), + DEVLINK_TRAP(IPV6_NEIGH_ADVERT, CONTROL), + DEVLINK_TRAP(IPV4_BFD, CONTROL), + DEVLINK_TRAP(IPV6_BFD, CONTROL), + DEVLINK_TRAP(IPV4_OSPF, CONTROL), + DEVLINK_TRAP(IPV6_OSPF, CONTROL), + DEVLINK_TRAP(IPV4_BGP, CONTROL), + DEVLINK_TRAP(IPV6_BGP, CONTROL), + DEVLINK_TRAP(IPV4_VRRP, CONTROL), + DEVLINK_TRAP(IPV6_VRRP, CONTROL), + DEVLINK_TRAP(IPV4_PIM, CONTROL), + DEVLINK_TRAP(IPV6_PIM, CONTROL), + DEVLINK_TRAP(UC_LB, CONTROL), + DEVLINK_TRAP(LOCAL_ROUTE, CONTROL), + DEVLINK_TRAP(EXTERNAL_ROUTE, CONTROL), + DEVLINK_TRAP(IPV6_UC_DIP_LINK_LOCAL_SCOPE, CONTROL), + DEVLINK_TRAP(IPV6_DIP_ALL_NODES, CONTROL), + DEVLINK_TRAP(IPV6_DIP_ALL_ROUTERS, CONTROL), + DEVLINK_TRAP(IPV6_ROUTER_SOLICIT, CONTROL), + 
DEVLINK_TRAP(IPV6_ROUTER_ADVERT, CONTROL), + DEVLINK_TRAP(IPV6_REDIRECT, CONTROL), + DEVLINK_TRAP(IPV4_ROUTER_ALERT, CONTROL), + DEVLINK_TRAP(IPV6_ROUTER_ALERT, CONTROL), + DEVLINK_TRAP(PTP_EVENT, CONTROL), + DEVLINK_TRAP(PTP_GENERAL, CONTROL), + DEVLINK_TRAP(FLOW_ACTION_SAMPLE, CONTROL), + DEVLINK_TRAP(FLOW_ACTION_TRAP, CONTROL), }; #define DEVLINK_TRAP_GROUP(_id) \ @@ -8505,9 +8550,28 @@ static const struct devlink_trap devlink_trap_generic[] = { static const struct devlink_trap_group devlink_trap_group_generic[] = { DEVLINK_TRAP_GROUP(L2_DROPS), DEVLINK_TRAP_GROUP(L3_DROPS), + DEVLINK_TRAP_GROUP(L3_EXCEPTIONS), DEVLINK_TRAP_GROUP(BUFFER_DROPS), DEVLINK_TRAP_GROUP(TUNNEL_DROPS), DEVLINK_TRAP_GROUP(ACL_DROPS), + DEVLINK_TRAP_GROUP(STP), + DEVLINK_TRAP_GROUP(LACP), + DEVLINK_TRAP_GROUP(LLDP), + DEVLINK_TRAP_GROUP(MC_SNOOPING), + DEVLINK_TRAP_GROUP(DHCP), + DEVLINK_TRAP_GROUP(NEIGH_DISCOVERY), + DEVLINK_TRAP_GROUP(BFD), + DEVLINK_TRAP_GROUP(OSPF), + DEVLINK_TRAP_GROUP(BGP), + DEVLINK_TRAP_GROUP(VRRP), + DEVLINK_TRAP_GROUP(PIM), + DEVLINK_TRAP_GROUP(UC_LB), + DEVLINK_TRAP_GROUP(LOCAL_DELIVERY), + DEVLINK_TRAP_GROUP(IPV6), + DEVLINK_TRAP_GROUP(PTP_EVENT), + DEVLINK_TRAP_GROUP(PTP_GENERAL), + DEVLINK_TRAP_GROUP(ACL_SAMPLE), + DEVLINK_TRAP_GROUP(ACL_TRAP), }; static int devlink_trap_generic_verify(const struct devlink_trap *trap) @@ -8845,6 +8909,13 @@ void devlink_trap_report(struct devlink *devlink, struct sk_buff *skb, devlink_trap_stats_update(trap_item->stats, skb->len); devlink_trap_stats_update(trap_item->group_item->stats, skb->len); + /* Control packets were not dropped by the device or encountered an + * exception during forwarding and therefore should not be reported to + * the kernel's drop monitor. + */ + if (trap_item->trap->type == DEVLINK_TRAP_TYPE_CONTROL) + return; + devlink_trap_report_metadata_fill(&hw_metadata, trap_item, in_devlink_port, fa_cookie); net_dm_hw_report(skb, &hw_metadata); diff --git a/net/core/flow_offload.c b/net/core/flow_offload.c index e64941c526b1..0cfc35e6be28 100644 --- a/net/core/flow_offload.c +++ b/net/core/flow_offload.c @@ -317,240 +317,159 @@ int flow_block_cb_setup_simple(struct flow_block_offload *f, } EXPORT_SYMBOL(flow_block_cb_setup_simple); -static LIST_HEAD(block_cb_list); - -static struct rhashtable indr_setup_block_ht; - -struct flow_indr_block_cb { - struct list_head list; - void *cb_priv; - flow_indr_block_bind_cb_t *cb; - void *cb_ident; -}; - -struct flow_indr_block_dev { - struct rhash_head ht_node; - struct net_device *dev; - unsigned int refcnt; - struct list_head cb_list; +static DEFINE_MUTEX(flow_indr_block_lock); +static LIST_HEAD(flow_block_indr_list); +static LIST_HEAD(flow_block_indr_dev_list); + +struct flow_indr_dev { + struct list_head list; + flow_indr_block_bind_cb_t *cb; + void *cb_priv; + refcount_t refcnt; + struct rcu_head rcu; }; -static const struct rhashtable_params flow_indr_setup_block_ht_params = { - .key_offset = offsetof(struct flow_indr_block_dev, dev), - .head_offset = offsetof(struct flow_indr_block_dev, ht_node), - .key_len = sizeof(struct net_device *), -}; - -static struct flow_indr_block_dev * -flow_indr_block_dev_lookup(struct net_device *dev) +static struct flow_indr_dev *flow_indr_dev_alloc(flow_indr_block_bind_cb_t *cb, + void *cb_priv) { - return rhashtable_lookup_fast(&indr_setup_block_ht, &dev, - flow_indr_setup_block_ht_params); -} + struct flow_indr_dev *indr_dev; -static struct flow_indr_block_dev * -flow_indr_block_dev_get(struct net_device *dev) -{ - struct flow_indr_block_dev *indr_dev; - 
- indr_dev = flow_indr_block_dev_lookup(dev); - if (indr_dev) - goto inc_ref; - - indr_dev = kzalloc(sizeof(*indr_dev), GFP_KERNEL); + indr_dev = kmalloc(sizeof(*indr_dev), GFP_KERNEL); if (!indr_dev) return NULL; - INIT_LIST_HEAD(&indr_dev->cb_list); - indr_dev->dev = dev; - if (rhashtable_insert_fast(&indr_setup_block_ht, &indr_dev->ht_node, - flow_indr_setup_block_ht_params)) { - kfree(indr_dev); - return NULL; - } + indr_dev->cb = cb; + indr_dev->cb_priv = cb_priv; + refcount_set(&indr_dev->refcnt, 1); -inc_ref: - indr_dev->refcnt++; return indr_dev; } -static void flow_indr_block_dev_put(struct flow_indr_block_dev *indr_dev) -{ - if (--indr_dev->refcnt) - return; - - rhashtable_remove_fast(&indr_setup_block_ht, &indr_dev->ht_node, - flow_indr_setup_block_ht_params); - kfree(indr_dev); -} - -static struct flow_indr_block_cb * -flow_indr_block_cb_lookup(struct flow_indr_block_dev *indr_dev, - flow_indr_block_bind_cb_t *cb, void *cb_ident) -{ - struct flow_indr_block_cb *indr_block_cb; - - list_for_each_entry(indr_block_cb, &indr_dev->cb_list, list) - if (indr_block_cb->cb == cb && - indr_block_cb->cb_ident == cb_ident) - return indr_block_cb; - return NULL; -} - -static struct flow_indr_block_cb * -flow_indr_block_cb_add(struct flow_indr_block_dev *indr_dev, void *cb_priv, - flow_indr_block_bind_cb_t *cb, void *cb_ident) +int flow_indr_dev_register(flow_indr_block_bind_cb_t *cb, void *cb_priv) { - struct flow_indr_block_cb *indr_block_cb; + struct flow_indr_dev *indr_dev; - indr_block_cb = flow_indr_block_cb_lookup(indr_dev, cb, cb_ident); - if (indr_block_cb) - return ERR_PTR(-EEXIST); - - indr_block_cb = kzalloc(sizeof(*indr_block_cb), GFP_KERNEL); - if (!indr_block_cb) - return ERR_PTR(-ENOMEM); + mutex_lock(&flow_indr_block_lock); + list_for_each_entry(indr_dev, &flow_block_indr_dev_list, list) { + if (indr_dev->cb == cb && + indr_dev->cb_priv == cb_priv) { + refcount_inc(&indr_dev->refcnt); + mutex_unlock(&flow_indr_block_lock); + return 0; + } + } - indr_block_cb->cb_priv = cb_priv; - indr_block_cb->cb = cb; - indr_block_cb->cb_ident = cb_ident; - list_add(&indr_block_cb->list, &indr_dev->cb_list); + indr_dev = flow_indr_dev_alloc(cb, cb_priv); + if (!indr_dev) { + mutex_unlock(&flow_indr_block_lock); + return -ENOMEM; + } - return indr_block_cb; -} + list_add(&indr_dev->list, &flow_block_indr_dev_list); + mutex_unlock(&flow_indr_block_lock); -static void flow_indr_block_cb_del(struct flow_indr_block_cb *indr_block_cb) -{ - list_del(&indr_block_cb->list); - kfree(indr_block_cb); + return 0; } +EXPORT_SYMBOL(flow_indr_dev_register); -static DEFINE_MUTEX(flow_indr_block_cb_lock); - -static void flow_block_cmd(struct net_device *dev, - flow_indr_block_bind_cb_t *cb, void *cb_priv, - enum flow_block_command command) +static void __flow_block_indr_cleanup(flow_setup_cb_t *setup_cb, void *cb_priv, + struct list_head *cleanup_list) { - struct flow_indr_block_entry *entry; + struct flow_block_cb *this, *next; - mutex_lock(&flow_indr_block_cb_lock); - list_for_each_entry(entry, &block_cb_list, list) { - entry->cb(dev, cb, cb_priv, command); + list_for_each_entry_safe(this, next, &flow_block_indr_list, indr.list) { + if (this->cb == setup_cb && + this->cb_priv == cb_priv) { + list_move(&this->indr.list, cleanup_list); + return; + } } - mutex_unlock(&flow_indr_block_cb_lock); } -int __flow_indr_block_cb_register(struct net_device *dev, void *cb_priv, - flow_indr_block_bind_cb_t *cb, - void *cb_ident) +static void flow_block_indr_notify(struct list_head *cleanup_list) { - struct 
flow_indr_block_cb *indr_block_cb; - struct flow_indr_block_dev *indr_dev; - int err; - - indr_dev = flow_indr_block_dev_get(dev); - if (!indr_dev) - return -ENOMEM; - - indr_block_cb = flow_indr_block_cb_add(indr_dev, cb_priv, cb, cb_ident); - err = PTR_ERR_OR_ZERO(indr_block_cb); - if (err) - goto err_dev_put; + struct flow_block_cb *this, *next; - flow_block_cmd(dev, indr_block_cb->cb, indr_block_cb->cb_priv, - FLOW_BLOCK_BIND); - - return 0; - -err_dev_put: - flow_indr_block_dev_put(indr_dev); - return err; -} -EXPORT_SYMBOL_GPL(__flow_indr_block_cb_register); - -int flow_indr_block_cb_register(struct net_device *dev, void *cb_priv, - flow_indr_block_bind_cb_t *cb, - void *cb_ident) -{ - int err; - - rtnl_lock(); - err = __flow_indr_block_cb_register(dev, cb_priv, cb, cb_ident); - rtnl_unlock(); - - return err; + list_for_each_entry_safe(this, next, cleanup_list, indr.list) { + list_del(&this->indr.list); + this->indr.cleanup(this); + } } -EXPORT_SYMBOL_GPL(flow_indr_block_cb_register); -void __flow_indr_block_cb_unregister(struct net_device *dev, - flow_indr_block_bind_cb_t *cb, - void *cb_ident) +void flow_indr_dev_unregister(flow_indr_block_bind_cb_t *cb, void *cb_priv, + flow_setup_cb_t *setup_cb) { - struct flow_indr_block_cb *indr_block_cb; - struct flow_indr_block_dev *indr_dev; + struct flow_indr_dev *this, *next, *indr_dev = NULL; + LIST_HEAD(cleanup_list); - indr_dev = flow_indr_block_dev_lookup(dev); - if (!indr_dev) - return; + mutex_lock(&flow_indr_block_lock); + list_for_each_entry_safe(this, next, &flow_block_indr_dev_list, list) { + if (this->cb == cb && + this->cb_priv == cb_priv && + refcount_dec_and_test(&this->refcnt)) { + indr_dev = this; + list_del(&indr_dev->list); + break; + } + } - indr_block_cb = flow_indr_block_cb_lookup(indr_dev, cb, cb_ident); - if (!indr_block_cb) + if (!indr_dev) { + mutex_unlock(&flow_indr_block_lock); return; + } - flow_block_cmd(dev, indr_block_cb->cb, indr_block_cb->cb_priv, - FLOW_BLOCK_UNBIND); + __flow_block_indr_cleanup(setup_cb, cb_priv, &cleanup_list); + mutex_unlock(&flow_indr_block_lock); - flow_indr_block_cb_del(indr_block_cb); - flow_indr_block_dev_put(indr_dev); + flow_block_indr_notify(&cleanup_list); + kfree(indr_dev); } -EXPORT_SYMBOL_GPL(__flow_indr_block_cb_unregister); +EXPORT_SYMBOL(flow_indr_dev_unregister); -void flow_indr_block_cb_unregister(struct net_device *dev, - flow_indr_block_bind_cb_t *cb, - void *cb_ident) +static void flow_block_indr_init(struct flow_block_cb *flow_block, + struct flow_block_offload *bo, + struct net_device *dev, void *data, + void (*cleanup)(struct flow_block_cb *block_cb)) { - rtnl_lock(); - __flow_indr_block_cb_unregister(dev, cb, cb_ident); - rtnl_unlock(); + flow_block->indr.binder_type = bo->binder_type; + flow_block->indr.data = data; + flow_block->indr.dev = dev; + flow_block->indr.cleanup = cleanup; } -EXPORT_SYMBOL_GPL(flow_indr_block_cb_unregister); -void flow_indr_block_call(struct net_device *dev, - struct flow_block_offload *bo, - enum flow_block_command command, - enum tc_setup_type type) +static void __flow_block_indr_binding(struct flow_block_offload *bo, + struct net_device *dev, void *data, + void (*cleanup)(struct flow_block_cb *block_cb)) { - struct flow_indr_block_cb *indr_block_cb; - struct flow_indr_block_dev *indr_dev; - - indr_dev = flow_indr_block_dev_lookup(dev); - if (!indr_dev) - return; + struct flow_block_cb *block_cb; - list_for_each_entry(indr_block_cb, &indr_dev->cb_list, list) - indr_block_cb->cb(dev, indr_block_cb->cb_priv, type, bo); + 
list_for_each_entry(block_cb, &bo->cb_list, list) { + switch (bo->command) { + case FLOW_BLOCK_BIND: + flow_block_indr_init(block_cb, bo, dev, data, cleanup); + list_add(&block_cb->indr.list, &flow_block_indr_list); + break; + case FLOW_BLOCK_UNBIND: + list_del(&block_cb->indr.list); + break; + } + } } -EXPORT_SYMBOL_GPL(flow_indr_block_call); -void flow_indr_add_block_cb(struct flow_indr_block_entry *entry) +int flow_indr_dev_setup_offload(struct net_device *dev, + enum tc_setup_type type, void *data, + struct flow_block_offload *bo, + void (*cleanup)(struct flow_block_cb *block_cb)) { - mutex_lock(&flow_indr_block_cb_lock); - list_add_tail(&entry->list, &block_cb_list); - mutex_unlock(&flow_indr_block_cb_lock); -} -EXPORT_SYMBOL_GPL(flow_indr_add_block_cb); + struct flow_indr_dev *this; -void flow_indr_del_block_cb(struct flow_indr_block_entry *entry) -{ - mutex_lock(&flow_indr_block_cb_lock); - list_del(&entry->list); - mutex_unlock(&flow_indr_block_cb_lock); -} -EXPORT_SYMBOL_GPL(flow_indr_del_block_cb); + mutex_lock(&flow_indr_block_lock); + list_for_each_entry(this, &flow_block_indr_dev_list, list) + this->cb(dev, this->cb_priv, type, bo); -static int __init init_flow_indr_rhashtable(void) -{ - return rhashtable_init(&indr_setup_block_ht, - &flow_indr_setup_block_ht_params); + __flow_block_indr_binding(bo, dev, data, cleanup); + mutex_unlock(&flow_indr_block_lock); + + return list_empty(&bo->cb_list) ? -EOPNOTSUPP : 0; } -subsys_initcall(init_flow_indr_rhashtable); +EXPORT_SYMBOL(flow_indr_dev_setup_offload); diff --git a/net/core/neighbour.c b/net/core/neighbour.c index 37e4dba62460..ef6b5a8f629c 100644 --- a/net/core/neighbour.c +++ b/net/core/neighbour.c @@ -1082,8 +1082,8 @@ static void neigh_timer_handler(struct timer_list *t) } if (neigh->nud_state & NUD_IN_TIMER) { - if (time_before(next, jiffies + HZ/2)) - next = jiffies + HZ/2; + if (time_before(next, jiffies + HZ/100)) + next = jiffies + HZ/100; if (!mod_timer(&neigh->timer, next)) neigh_hold(neigh); } diff --git a/net/dsa/slave.c b/net/dsa/slave.c index 886490fb203d..4c7f086a047b 100644 --- a/net/dsa/slave.c +++ b/net/dsa/slave.c @@ -1746,6 +1746,7 @@ int dsa_slave_create(struct dsa_port *port) if (ds->ops->port_vlan_add && ds->ops->port_vlan_del) slave_dev->features |= NETIF_F_HW_VLAN_CTAG_FILTER; slave_dev->hw_features |= NETIF_F_HW_TC; + slave_dev->features |= NETIF_F_LLTX; slave_dev->ethtool_ops = &dsa_slave_ethtool_ops; if (!IS_ERR_OR_NULL(port->mac)) ether_addr_copy(slave_dev->dev_addr, port->mac); diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c index f048d0a188b7..123a6d39438f 100644 --- a/net/ipv4/devinet.c +++ b/net/ipv4/devinet.c @@ -276,6 +276,7 @@ static struct in_device *inetdev_init(struct net_device *dev) err = devinet_sysctl_register(in_dev); if (err) { in_dev->dead = 1; + neigh_parms_release(&arp_tbl, in_dev->arp_parms); in_dev_put(in_dev); in_dev = NULL; goto out; diff --git a/net/ipv4/esp4_offload.c b/net/ipv4/esp4_offload.c index 731022cff600..d14133eac476 100644 --- a/net/ipv4/esp4_offload.c +++ b/net/ipv4/esp4_offload.c @@ -63,10 +63,8 @@ static struct sk_buff *esp4_gro_receive(struct list_head *head, sp->olen++; xo = xfrm_offload(skb); - if (!xo) { - xfrm_state_put(x); + if (!xo) goto out_reset; - } } xo->flags |= XFRM_GRO; @@ -139,19 +137,27 @@ static struct sk_buff *xfrm4_beet_gso_segment(struct xfrm_state *x, struct xfrm_offload *xo = xfrm_offload(skb); struct sk_buff *segs = ERR_PTR(-EINVAL); const struct net_offload *ops; - int proto = xo->proto; + u8 proto = xo->proto; 
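	/* proto is declared as u8 (rather than int) so that its address can
	 * be passed to ipv6_skip_exthdr(), which writes the next-header
	 * value back through a u8 pointer in the IPv6 selector branch below.
	 */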
skb->transport_header += x->props.header_len; - if (proto == IPPROTO_BEETPH) { - struct ip_beet_phdr *ph = (struct ip_beet_phdr *)skb->data; + if (x->sel.family != AF_INET6) { + if (proto == IPPROTO_BEETPH) { + struct ip_beet_phdr *ph = + (struct ip_beet_phdr *)skb->data; + + skb->transport_header += ph->hdrlen * 8; + proto = ph->nexthdr; + } else { + skb->transport_header -= IPV4_BEET_PHMAXLEN; + } + } else { + __be16 frag; - skb->transport_header += ph->hdrlen * 8; - proto = ph->nexthdr; - } else if (x->sel.family != AF_INET6) { - skb->transport_header -= IPV4_BEET_PHMAXLEN; - } else if (proto == IPPROTO_TCP) { - skb_shinfo(skb)->gso_type |= SKB_GSO_TCPV4; + skb->transport_header += + ipv6_skip_exthdr(skb, 0, &proto, &frag); + if (proto == IPPROTO_TCP) + skb_shinfo(skb)->gso_type |= SKB_GSO_TCPV4; } __skb_pull(skb, skb_transport_offset(skb)); diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c index 1bf9da3a75f9..41079490a118 100644 --- a/net/ipv4/fib_frontend.c +++ b/net/ipv4/fib_frontend.c @@ -309,17 +309,18 @@ bool fib_info_nh_uses_dev(struct fib_info *fi, const struct net_device *dev) { bool dev_match = false; #ifdef CONFIG_IP_ROUTE_MULTIPATH - int ret; + if (unlikely(fi->nh)) { + dev_match = nexthop_uses_dev(fi->nh, dev); + } else { + int ret; - for (ret = 0; ret < fib_info_num_path(fi); ret++) { - const struct fib_nh_common *nhc = fib_info_nhc(fi, ret); + for (ret = 0; ret < fib_info_num_path(fi); ret++) { + const struct fib_nh_common *nhc = fib_info_nhc(fi, ret); - if (nhc->nhc_dev == dev) { - dev_match = true; - break; - } else if (l3mdev_master_ifindex_rcu(nhc->nhc_dev) == dev->ifindex) { - dev_match = true; - break; + if (nhc_l3mdev_matches_dev(nhc, dev)) { + dev_match = true; + break; + } } } #else diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c index 4f334b425538..248f1c1959a6 100644 --- a/net/ipv4/fib_trie.c +++ b/net/ipv4/fib_trie.c @@ -1371,6 +1371,26 @@ static inline t_key prefix_mismatch(t_key key, struct key_vector *n) return (key ^ prefix) & (prefix | -prefix); } +bool fib_lookup_good_nhc(const struct fib_nh_common *nhc, int fib_flags, + const struct flowi4 *flp) +{ + if (nhc->nhc_flags & RTNH_F_DEAD) + return false; + + if (ip_ignore_linkdown(nhc->nhc_dev) && + nhc->nhc_flags & RTNH_F_LINKDOWN && + !(fib_flags & FIB_LOOKUP_IGNORE_LINKSTATE)) + return false; + + if (!(flp->flowi4_flags & FLOWI_FLAG_SKIP_NH_OIF)) { + if (flp->flowi4_oif && + flp->flowi4_oif != nhc->nhc_oif) + return false; + } + + return true; +} + /* should be called with rcu_read_lock */ int fib_table_lookup(struct fib_table *tb, const struct flowi4 *flp, struct fib_result *res, int fib_flags) @@ -1503,6 +1523,7 @@ found: /* Step 3: Process the leaf, if that fails fall back to backtracing */ hlist_for_each_entry_rcu(fa, &n->leaf, fa_list) { struct fib_info *fi = fa->fa_info; + struct fib_nh_common *nhc; int nhsel, err; if ((BITS_PER_LONG > KEYLENGTH) || (fa->fa_slen < KEYLENGTH)) { @@ -1528,26 +1549,25 @@ out_reject: if (fi->fib_flags & RTNH_F_DEAD) continue; - if (unlikely(fi->nh && nexthop_is_blackhole(fi->nh))) { - err = fib_props[RTN_BLACKHOLE].error; - goto out_reject; + if (unlikely(fi->nh)) { + if (nexthop_is_blackhole(fi->nh)) { + err = fib_props[RTN_BLACKHOLE].error; + goto out_reject; + } + + nhc = nexthop_get_nhc_lookup(fi->nh, fib_flags, flp, + &nhsel); + if (nhc) + goto set_result; + goto miss; } for (nhsel = 0; nhsel < fib_info_num_path(fi); nhsel++) { - struct fib_nh_common *nhc = fib_info_nhc(fi, nhsel); + nhc = fib_info_nhc(fi, nhsel); - if (nhc->nhc_flags & 
RTNH_F_DEAD) + if (!fib_lookup_good_nhc(nhc, fib_flags, flp)) continue; - if (ip_ignore_linkdown(nhc->nhc_dev) && - nhc->nhc_flags & RTNH_F_LINKDOWN && - !(fib_flags & FIB_LOOKUP_IGNORE_LINKSTATE)) - continue; - if (!(flp->flowi4_flags & FLOWI_FLAG_SKIP_NH_OIF)) { - if (flp->flowi4_oif && - flp->flowi4_oif != nhc->nhc_oif) - continue; - } - +set_result: if (!(fib_flags & FIB_LOOKUP_NOREF)) refcount_inc(&fi->fib_clntref); @@ -1568,6 +1588,7 @@ out_reject: return err; } } +miss: #ifdef CONFIG_IP_FIB_TRIE_STATS this_cpu_inc(stats->semantic_match_miss); #endif diff --git a/net/ipv4/ip_vti.c b/net/ipv4/ip_vti.c index c8974360a99f..1d9c8cff5ac3 100644 --- a/net/ipv4/ip_vti.c +++ b/net/ipv4/ip_vti.c @@ -93,7 +93,28 @@ static int vti_rcv_proto(struct sk_buff *skb) static int vti_rcv_tunnel(struct sk_buff *skb) { - return vti_rcv(skb, ip_hdr(skb)->saddr, true); + struct ip_tunnel_net *itn = net_generic(dev_net(skb->dev), vti_net_id); + const struct iphdr *iph = ip_hdr(skb); + struct ip_tunnel *tunnel; + + tunnel = ip_tunnel_lookup(itn, skb->dev->ifindex, TUNNEL_NO_KEY, + iph->saddr, iph->daddr, 0); + if (tunnel) { + struct tnl_ptk_info tpi = { + .proto = htons(ETH_P_IP), + }; + + if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb)) + goto drop; + if (iptunnel_pull_header(skb, 0, tpi.proto, false)) + goto drop; + return ip_tunnel_rcv(tunnel, skb, &tpi, NULL, false); + } + + return -EINVAL; +drop: + kfree_skb(skb); + return 0; } static int vti_rcv_cb(struct sk_buff *skb, int err) diff --git a/net/ipv4/netfilter/nf_nat_pptp.c b/net/ipv4/netfilter/nf_nat_pptp.c index 3c25a467b3ef..7afde8828b4c 100644 --- a/net/ipv4/netfilter/nf_nat_pptp.c +++ b/net/ipv4/netfilter/nf_nat_pptp.c @@ -166,8 +166,7 @@ pptp_outbound_pkt(struct sk_buff *skb, break; default: pr_debug("unknown outbound packet 0x%04x:%s\n", msg, - msg <= PPTP_MSG_MAX ? pptp_msg_name[msg] : - pptp_msg_name[0]); + pptp_msg_name(msg)); fallthrough; case PPTP_SET_LINK_INFO: /* only need to NAT in case PAC is behind NAT box */ @@ -268,9 +267,7 @@ pptp_inbound_pkt(struct sk_buff *skb, pcid_off = offsetof(union pptp_ctrl_union, setlink.peersCallID); break; default: - pr_debug("unknown inbound packet %s\n", - msg <= PPTP_MSG_MAX ? 
pptp_msg_name[msg] : - pptp_msg_name[0]); + pr_debug("unknown inbound packet %s\n", pptp_msg_name(msg)); fallthrough; case PPTP_START_SESSION_REQUEST: case PPTP_START_SESSION_REPLY: diff --git a/net/ipv4/nexthop.c b/net/ipv4/nexthop.c index ec1282858cb7..400a9f89ebdb 100644 --- a/net/ipv4/nexthop.c +++ b/net/ipv4/nexthop.c @@ -75,9 +75,16 @@ static void nexthop_free_mpath(struct nexthop *nh) int i; nhg = rcu_dereference_raw(nh->nh_grp); - for (i = 0; i < nhg->num_nh; ++i) - WARN_ON(nhg->nh_entries[i].nh); + for (i = 0; i < nhg->num_nh; ++i) { + struct nh_grp_entry *nhge = &nhg->nh_entries[i]; + + WARN_ON(!list_empty(&nhge->nh_list)); + nexthop_put(nhge->nh); + } + + WARN_ON(nhg->spare == nhg); + kfree(nhg->spare); kfree(nhg); } @@ -758,41 +765,56 @@ static void nh_group_rebalance(struct nh_group *nhg) } } -static void remove_nh_grp_entry(struct nh_grp_entry *nhge, - struct nh_group *nhg, +static void remove_nh_grp_entry(struct net *net, struct nh_grp_entry *nhge, struct nl_info *nlinfo) { + struct nh_grp_entry *nhges, *new_nhges; + struct nexthop *nhp = nhge->nh_parent; struct nexthop *nh = nhge->nh; - struct nh_grp_entry *nhges; - bool found = false; - int i; + struct nh_group *nhg, *newg; + int i, j; WARN_ON(!nh); - nhges = nhg->nh_entries; - for (i = 0; i < nhg->num_nh; ++i) { - if (found) { - nhges[i-1].nh = nhges[i].nh; - nhges[i-1].weight = nhges[i].weight; - list_del(&nhges[i].nh_list); - list_add(&nhges[i-1].nh_list, &nhges[i-1].nh->grp_list); - } else if (nhg->nh_entries[i].nh == nh) { - found = true; - } - } + nhg = rtnl_dereference(nhp->nh_grp); + newg = nhg->spare; - if (WARN_ON(!found)) + /* last entry, keep it visible and remove the parent */ + if (nhg->num_nh == 1) { + remove_nexthop(net, nhp, nlinfo); return; + } + + newg->has_v4 = nhg->has_v4; + newg->mpath = nhg->mpath; + newg->num_nh = nhg->num_nh; - nhg->num_nh--; - nhg->nh_entries[nhg->num_nh].nh = NULL; + /* copy old entries to new except the one getting removed */ + nhges = nhg->nh_entries; + new_nhges = newg->nh_entries; + for (i = 0, j = 0; i < nhg->num_nh; ++i) { + /* current nexthop getting removed */ + if (nhg->nh_entries[i].nh == nh) { + newg->num_nh--; + continue; + } - nh_group_rebalance(nhg); + list_del(&nhges[i].nh_list); + new_nhges[j].nh_parent = nhges[i].nh_parent; + new_nhges[j].nh = nhges[i].nh; + new_nhges[j].weight = nhges[i].weight; + list_add(&new_nhges[j].nh_list, &new_nhges[j].nh->grp_list); + j++; + } - nexthop_put(nh); + nh_group_rebalance(newg); + rcu_assign_pointer(nhp->nh_grp, newg); + + list_del(&nhge->nh_list); + nexthop_put(nhge->nh); if (nlinfo) - nexthop_notify(RTM_NEWNEXTHOP, nhge->nh_parent, nlinfo); + nexthop_notify(RTM_NEWNEXTHOP, nhp, nlinfo); } static void remove_nexthop_from_groups(struct net *net, struct nexthop *nh, @@ -800,17 +822,11 @@ static void remove_nexthop_from_groups(struct net *net, struct nexthop *nh, { struct nh_grp_entry *nhge, *tmp; - list_for_each_entry_safe(nhge, tmp, &nh->grp_list, nh_list) { - struct nh_group *nhg; - - list_del(&nhge->nh_list); - nhg = rtnl_dereference(nhge->nh_parent->nh_grp); - remove_nh_grp_entry(nhge, nhg, nlinfo); + list_for_each_entry_safe(nhge, tmp, &nh->grp_list, nh_list) + remove_nh_grp_entry(net, nhge, nlinfo); - /* if this group has no more entries then remove it */ - if (!nhg->num_nh) - remove_nexthop(net, nhge->nh_parent, nlinfo); - } + /* make sure all see the newly published array before releasing rtnl */ + synchronize_rcu(); } static void remove_nexthop_group(struct nexthop *nh, struct nl_info *nlinfo) @@ -824,10 +840,7 @@ 
static void remove_nexthop_group(struct nexthop *nh, struct nl_info *nlinfo) if (WARN_ON(!nhge->nh)) continue; - list_del(&nhge->nh_list); - nexthop_put(nhge->nh); - nhge->nh = NULL; - nhg->num_nh--; + list_del_init(&nhge->nh_list); } } @@ -1153,6 +1166,7 @@ static struct nexthop *nexthop_create_group(struct net *net, { struct nlattr *grps_attr = cfg->nh_grp; struct nexthop_grp *entry = nla_data(grps_attr); + u16 num_nh = nla_len(grps_attr) / sizeof(*entry); struct nh_group *nhg; struct nexthop *nh; int i; @@ -1163,12 +1177,21 @@ static struct nexthop *nexthop_create_group(struct net *net, nh->is_group = 1; - nhg = nexthop_grp_alloc(nla_len(grps_attr) / sizeof(*entry)); + nhg = nexthop_grp_alloc(num_nh); if (!nhg) { kfree(nh); return ERR_PTR(-ENOMEM); } + /* spare group used for removals */ + nhg->spare = nexthop_grp_alloc(num_nh); + if (!nhg->spare) { + kfree(nhg); + kfree(nh); + return ERR_PTR(-ENOMEM); + } + nhg->spare->spare = nhg; + for (i = 0; i < nhg->num_nh; ++i) { struct nexthop *nhe; struct nh_info *nhi; @@ -1203,6 +1226,7 @@ out_no_nh: for (; i >= 0; --i) nexthop_put(nhg->nh_entries[i].nh); + kfree(nhg->spare); kfree(nhg); kfree(nh); diff --git a/net/ipv6/esp6_offload.c b/net/ipv6/esp6_offload.c index 06163cc15844..55addea1948f 100644 --- a/net/ipv6/esp6_offload.c +++ b/net/ipv6/esp6_offload.c @@ -85,10 +85,8 @@ static struct sk_buff *esp6_gro_receive(struct list_head *head, sp->olen++; xo = xfrm_offload(skb); - if (!xo) { - xfrm_state_put(x); + if (!xo) goto out_reset; - } } xo->flags |= XFRM_GRO; @@ -123,9 +121,16 @@ static void esp6_gso_encap(struct xfrm_state *x, struct sk_buff *skb) struct ip_esp_hdr *esph; struct ipv6hdr *iph = ipv6_hdr(skb); struct xfrm_offload *xo = xfrm_offload(skb); - int proto = iph->nexthdr; + u8 proto = iph->nexthdr; skb_push(skb, -skb_network_offset(skb)); + + if (x->outer_mode.encap == XFRM_MODE_TRANSPORT) { + __be16 frag; + + ipv6_skip_exthdr(skb, sizeof(struct ipv6hdr), &proto, &frag); + } + esph = ip_esp_hdr(skb); *skb_mac_header(skb) = IPPROTO_ESP; @@ -166,23 +171,31 @@ static struct sk_buff *xfrm6_beet_gso_segment(struct xfrm_state *x, struct xfrm_offload *xo = xfrm_offload(skb); struct sk_buff *segs = ERR_PTR(-EINVAL); const struct net_offload *ops; - int proto = xo->proto; + u8 proto = xo->proto; skb->transport_header += x->props.header_len; - if (proto == IPPROTO_BEETPH) { - struct ip_beet_phdr *ph = (struct ip_beet_phdr *)skb->data; - - skb->transport_header += ph->hdrlen * 8; - proto = ph->nexthdr; - } - if (x->sel.family != AF_INET6) { skb->transport_header -= (sizeof(struct ipv6hdr) - sizeof(struct iphdr)); + if (proto == IPPROTO_BEETPH) { + struct ip_beet_phdr *ph = + (struct ip_beet_phdr *)skb->data; + + skb->transport_header += ph->hdrlen * 8; + proto = ph->nexthdr; + } else { + skb->transport_header -= IPV4_BEET_PHMAXLEN; + } + if (proto == IPPROTO_TCP) skb_shinfo(skb)->gso_type |= SKB_GSO_TCPV6; + } else { + __be16 frag; + + skb->transport_header += + ipv6_skip_exthdr(skb, 0, &proto, &frag); } __skb_pull(skb, skb_transport_offset(skb)); diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c index fcb53ed1c4fb..6d7ef78c88af 100644 --- a/net/l2tp/l2tp_core.c +++ b/net/l2tp/l2tp_core.c @@ -1458,6 +1458,9 @@ static int l2tp_validate_socket(const struct sock *sk, const struct net *net, if (sk->sk_type != SOCK_DGRAM) return -EPROTONOSUPPORT; + if (sk->sk_family != PF_INET && sk->sk_family != PF_INET6) + return -EPROTONOSUPPORT; + if ((encap == L2TP_ENCAPTYPE_UDP && sk->sk_protocol != IPPROTO_UDP) || (encap == L2TP_ENCAPTYPE_IP && 
sk->sk_protocol != IPPROTO_L2TP)) return -EPROTONOSUPPORT; diff --git a/net/l2tp/l2tp_ip.c b/net/l2tp/l2tp_ip.c index 0d7c887a2b75..955662a6dee7 100644 --- a/net/l2tp/l2tp_ip.c +++ b/net/l2tp/l2tp_ip.c @@ -20,7 +20,6 @@ #include <net/icmp.h> #include <net/udp.h> #include <net/inet_common.h> -#include <net/inet_hashtables.h> #include <net/tcp_states.h> #include <net/protocol.h> #include <net/xfrm.h> @@ -209,15 +208,31 @@ discard: return 0; } -static int l2tp_ip_open(struct sock *sk) +static int l2tp_ip_hash(struct sock *sk) { - /* Prevent autobind. We don't have ports. */ - inet_sk(sk)->inet_num = IPPROTO_L2TP; + if (sk_unhashed(sk)) { + write_lock_bh(&l2tp_ip_lock); + sk_add_node(sk, &l2tp_ip_table); + write_unlock_bh(&l2tp_ip_lock); + } + return 0; +} +static void l2tp_ip_unhash(struct sock *sk) +{ + if (sk_unhashed(sk)) + return; write_lock_bh(&l2tp_ip_lock); - sk_add_node(sk, &l2tp_ip_table); + sk_del_node_init(sk); write_unlock_bh(&l2tp_ip_lock); +} + +static int l2tp_ip_open(struct sock *sk) +{ + /* Prevent autobind. We don't have ports. */ + inet_sk(sk)->inet_num = IPPROTO_L2TP; + l2tp_ip_hash(sk); return 0; } @@ -594,8 +609,8 @@ static struct proto l2tp_ip_prot = { .sendmsg = l2tp_ip_sendmsg, .recvmsg = l2tp_ip_recvmsg, .backlog_rcv = l2tp_ip_backlog_recv, - .hash = inet_hash, - .unhash = inet_unhash, + .hash = l2tp_ip_hash, + .unhash = l2tp_ip_unhash, .obj_size = sizeof(struct l2tp_ip_sock), #ifdef CONFIG_COMPAT .compat_setsockopt = compat_ip_setsockopt, diff --git a/net/l2tp/l2tp_ip6.c b/net/l2tp/l2tp_ip6.c index fdfef926c591..526ed2c24dd5 100644 --- a/net/l2tp/l2tp_ip6.c +++ b/net/l2tp/l2tp_ip6.c @@ -20,8 +20,6 @@ #include <net/icmp.h> #include <net/udp.h> #include <net/inet_common.h> -#include <net/inet_hashtables.h> -#include <net/inet6_hashtables.h> #include <net/tcp_states.h> #include <net/protocol.h> #include <net/xfrm.h> @@ -222,15 +220,31 @@ discard: return 0; } -static int l2tp_ip6_open(struct sock *sk) +static int l2tp_ip6_hash(struct sock *sk) { - /* Prevent autobind. We don't have ports. */ - inet_sk(sk)->inet_num = IPPROTO_L2TP; + if (sk_unhashed(sk)) { + write_lock_bh(&l2tp_ip6_lock); + sk_add_node(sk, &l2tp_ip6_table); + write_unlock_bh(&l2tp_ip6_lock); + } + return 0; +} +static void l2tp_ip6_unhash(struct sock *sk) +{ + if (sk_unhashed(sk)) + return; write_lock_bh(&l2tp_ip6_lock); - sk_add_node(sk, &l2tp_ip6_table); + sk_del_node_init(sk); write_unlock_bh(&l2tp_ip6_lock); +} + +static int l2tp_ip6_open(struct sock *sk) +{ + /* Prevent autobind. We don't have ports. 
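	 * Keeping a non-zero value in inet_num is what stops the generic
	 * inet code from autobinding a local port for this socket.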
*/ + inet_sk(sk)->inet_num = IPPROTO_L2TP; + l2tp_ip6_hash(sk); return 0; } @@ -728,8 +742,8 @@ static struct proto l2tp_ip6_prot = { .sendmsg = l2tp_ip6_sendmsg, .recvmsg = l2tp_ip6_recvmsg, .backlog_rcv = l2tp_ip6_backlog_recv, - .hash = inet6_hash, - .unhash = inet_unhash, + .hash = l2tp_ip6_hash, + .unhash = l2tp_ip6_unhash, .obj_size = sizeof(struct l2tp_ip6_sock), #ifdef CONFIG_COMPAT .compat_setsockopt = compat_ipv6_setsockopt, diff --git a/net/mac80211/agg-rx.c b/net/mac80211/agg-rx.c index 4d1c335e06e5..7f245e9f114c 100644 --- a/net/mac80211/agg-rx.c +++ b/net/mac80211/agg-rx.c @@ -9,7 +9,7 @@ * Copyright 2007, Michael Wu <flamingice@sourmilk.net> * Copyright 2007-2010, Intel Corporation * Copyright(c) 2015-2017 Intel Deutschland GmbH - * Copyright (C) 2018 Intel Corporation + * Copyright (C) 2018-2020 Intel Corporation */ /** @@ -292,7 +292,8 @@ void ___ieee80211_start_rx_ba_session(struct sta_info *sta, goto end; } - if (!sta->sta.ht_cap.ht_supported) { + if (!sta->sta.ht_cap.ht_supported && + sta->sdata->vif.bss_conf.chandef.chan->band != NL80211_BAND_6GHZ) { ht_dbg(sta->sdata, "STA %pM erroneously requests BA session on tid %d w/o QoS\n", sta->sta.addr, tid); diff --git a/net/mac80211/agg-tx.c b/net/mac80211/agg-tx.c index c2d5f512526d..b37c8a983d88 100644 --- a/net/mac80211/agg-tx.c +++ b/net/mac80211/agg-tx.c @@ -593,7 +593,8 @@ int ieee80211_start_tx_ba_session(struct ieee80211_sta *pubsta, u16 tid, "Requested to start BA session on reserved tid=%d", tid)) return -EINVAL; - if (!pubsta->ht_cap.ht_supported) + if (!pubsta->ht_cap.ht_supported && + sta->sdata->vif.bss_conf.chandef.chan->band != NL80211_BAND_6GHZ) return -EINVAL; if (WARN_ON_ONCE(!local->ops->ampdu_action)) diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c index 548a384b0509..9b360544ad6f 100644 --- a/net/mac80211/cfg.c +++ b/net/mac80211/cfg.c @@ -1520,7 +1520,9 @@ static int sta_apply_parameters(struct ieee80211_local *local, if (params->he_capa) ieee80211_he_cap_ie_to_sta_he_cap(sdata, sband, (void *)params->he_capa, - params->he_capa_len, sta); + params->he_capa_len, + (void *)params->he_6ghz_capa, + sta); if (params->opmode_notif_used) { /* returned value is only needed for rc update, but the @@ -2196,7 +2198,8 @@ static int ieee80211_change_bss(struct wiphy *wiphy, } if (!sdata->vif.bss_conf.use_short_slot && - sband->band == NL80211_BAND_5GHZ) { + (sband->band == NL80211_BAND_5GHZ || + sband->band == NL80211_BAND_6GHZ)) { sdata->vif.bss_conf.use_short_slot = true; changed |= BSS_CHANGED_ERP_SLOT; } @@ -3957,7 +3960,7 @@ static int ieee80211_set_tid_config(struct wiphy *wiphy, static int ieee80211_reset_tid_config(struct wiphy *wiphy, struct net_device *dev, - const u8 *peer, u8 tid) + const u8 *peer, u8 tids) { struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev); struct sta_info *sta; @@ -3967,7 +3970,7 @@ static int ieee80211_reset_tid_config(struct wiphy *wiphy, return -EOPNOTSUPP; if (!peer) - return drv_reset_tid_config(sdata->local, sdata, NULL, tid); + return drv_reset_tid_config(sdata->local, sdata, NULL, tids); mutex_lock(&sdata->local->sta_mtx); sta = sta_info_get_bss(sdata, peer); @@ -3976,7 +3979,7 @@ static int ieee80211_reset_tid_config(struct wiphy *wiphy, return -ENOENT; } - ret = drv_reset_tid_config(sdata->local, sdata, &sta->sta, tid); + ret = drv_reset_tid_config(sdata->local, sdata, &sta->sta, tids); mutex_unlock(&sdata->local->sta_mtx); return ret; diff --git a/net/mac80211/driver-ops.h b/net/mac80211/driver-ops.h index 3877710e3b48..de69fc9c4f07 100644 --- 
a/net/mac80211/driver-ops.h +++ b/net/mac80211/driver-ops.h @@ -1375,12 +1375,12 @@ static inline int drv_set_tid_config(struct ieee80211_local *local, static inline int drv_reset_tid_config(struct ieee80211_local *local, struct ieee80211_sub_if_data *sdata, - struct ieee80211_sta *sta, u8 tid) + struct ieee80211_sta *sta, u8 tids) { int ret; might_sleep(); - ret = local->ops->reset_tid_config(&local->hw, &sdata->vif, sta, tid); + ret = local->ops->reset_tid_config(&local->hw, &sdata->vif, sta, tids); trace_drv_return_int(local, ret); return ret; diff --git a/net/mac80211/he.c b/net/mac80211/he.c index f520552b22be..cc26f239838b 100644 --- a/net/mac80211/he.c +++ b/net/mac80211/he.c @@ -8,10 +8,55 @@ #include "ieee80211_i.h" +static void +ieee80211_update_from_he_6ghz_capa(const struct ieee80211_he_6ghz_capa *he_6ghz_capa, + struct sta_info *sta) +{ + enum ieee80211_smps_mode smps_mode; + + if (sta->sdata->vif.type == NL80211_IFTYPE_AP || + sta->sdata->vif.type == NL80211_IFTYPE_AP_VLAN) { + switch (le16_get_bits(he_6ghz_capa->capa, + IEEE80211_HE_6GHZ_CAP_SM_PS)) { + case WLAN_HT_CAP_SM_PS_INVALID: + case WLAN_HT_CAP_SM_PS_STATIC: + smps_mode = IEEE80211_SMPS_STATIC; + break; + case WLAN_HT_CAP_SM_PS_DYNAMIC: + smps_mode = IEEE80211_SMPS_DYNAMIC; + break; + case WLAN_HT_CAP_SM_PS_DISABLED: + smps_mode = IEEE80211_SMPS_OFF; + break; + } + + sta->sta.smps_mode = smps_mode; + } else { + sta->sta.smps_mode = IEEE80211_SMPS_OFF; + } + + switch (le16_get_bits(he_6ghz_capa->capa, + IEEE80211_HE_6GHZ_CAP_MAX_MPDU_LEN)) { + case IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_11454: + sta->sta.max_amsdu_len = IEEE80211_MAX_MPDU_LEN_VHT_11454; + break; + case IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_7991: + sta->sta.max_amsdu_len = IEEE80211_MAX_MPDU_LEN_VHT_7991; + break; + case IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_3895: + default: + sta->sta.max_amsdu_len = IEEE80211_MAX_MPDU_LEN_VHT_3895; + break; + } + + sta->sta.he_6ghz_capa = *he_6ghz_capa; +} + void ieee80211_he_cap_ie_to_sta_he_cap(struct ieee80211_sub_if_data *sdata, struct ieee80211_supported_band *sband, const u8 *he_cap_ie, u8 he_cap_len, + const struct ieee80211_he_6ghz_capa *he_6ghz_capa, struct sta_info *sta) { struct ieee80211_sta_he_cap *he_cap = &sta->sta.he_cap; @@ -53,6 +98,9 @@ ieee80211_he_cap_ie_to_sta_he_cap(struct ieee80211_sub_if_data *sdata, sta->cur_max_bandwidth = ieee80211_sta_cap_rx_bw(sta); sta->sta.bandwidth = ieee80211_sta_cur_vht_bw(sta); + + if (sband->band == NL80211_BAND_6GHZ && he_6ghz_capa) + ieee80211_update_from_he_6ghz_capa(he_6ghz_capa, sta); } void diff --git a/net/mac80211/ibss.c b/net/mac80211/ibss.c index 2479cd48fed0..81d26fef41e9 100644 --- a/net/mac80211/ibss.c +++ b/net/mac80211/ibss.c @@ -9,7 +9,7 @@ * Copyright 2009, Johannes Berg <johannes@sipsolutions.net> * Copyright 2013-2014 Intel Mobile Communications GmbH * Copyright(c) 2016 Intel Deutschland GmbH - * Copyright(c) 2018-2019 Intel Corporation + * Copyright(c) 2018-2020 Intel Corporation */ #include <linux/delay.h> @@ -781,6 +781,7 @@ ieee80211_ibss_process_chanswitch(struct ieee80211_sub_if_data *sdata, enum nl80211_channel_type ch_type; int err; u32 sta_flags; + u32 vht_cap_info = 0; sdata_assert_lock(sdata); @@ -798,9 +799,13 @@ ieee80211_ibss_process_chanswitch(struct ieee80211_sub_if_data *sdata, break; } + if (elems->vht_cap_elem) + vht_cap_info = le32_to_cpu(elems->vht_cap_elem->vht_cap_info); + memset(&params, 0, sizeof(params)); err = ieee80211_parse_ch_switch_ie(sdata, elems, ifibss->chandef.chan->band, + vht_cap_info, sta_flags, ifibss->bssid, &csa_ie); /* 
can't switch to destination channel, fail */ if (err < 0) @@ -1060,8 +1065,10 @@ static void ieee80211_update_sta_info(struct ieee80211_sub_if_data *sdata, /* we both use VHT */ struct ieee80211_vht_cap cap_ie; struct ieee80211_sta_vht_cap cap = sta->sta.vht_cap; + u32 vht_cap_info = + le32_to_cpu(elems->vht_cap_elem->vht_cap_info); - ieee80211_chandef_vht_oper(&local->hw, + ieee80211_chandef_vht_oper(&local->hw, vht_cap_info, elems->vht_operation, elems->ht_operation, &chandef); diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h index 8cbae66b5cdb..ec1a71ac65f2 100644 --- a/net/mac80211/ieee80211_i.h +++ b/net/mac80211/ieee80211_i.h @@ -111,6 +111,8 @@ struct ieee80211_bss { size_t supp_rates_len; struct ieee80211_rate *beacon_rate; + u32 vht_cap_info; + /* * During association, we save an ERP value from a probe response so * that we can feed ERP info to the driver when handling the @@ -267,7 +269,7 @@ struct probe_resp { struct rcu_head rcu_head; int len; u16 csa_counter_offsets[IEEE80211_MAX_CSA_COUNTERS_NUM]; - u8 data[0]; + u8 data[]; }; struct ps_data { @@ -1494,6 +1496,7 @@ struct ieee802_11_elems { const struct ieee80211_he_operation *he_operation; const struct ieee80211_he_spr *he_spr; const struct ieee80211_mu_edca_param_set *mu_edca_param_set; + const struct ieee80211_he_6ghz_capa *he_6ghz_capa; const u8 *uora_element; const u8 *mesh_id; const u8 *peering; @@ -1783,7 +1786,8 @@ netdev_tx_t ieee80211_subif_start_xmit_8023(struct sk_buff *skb, void __ieee80211_subif_start_xmit(struct sk_buff *skb, struct net_device *dev, u32 info_flags, - u32 ctrl_flags); + u32 ctrl_flags, + u64 *cookie); void ieee80211_purge_tx_queue(struct ieee80211_hw *hw, struct sk_buff_head *skbs); struct sk_buff * @@ -1800,7 +1804,8 @@ void ieee80211_check_fast_xmit_iface(struct ieee80211_sub_if_data *sdata); void ieee80211_clear_fast_xmit(struct sta_info *sta); int ieee80211_tx_control_port(struct wiphy *wiphy, struct net_device *dev, const u8 *buf, size_t len, - const u8 *dest, __be16 proto, bool unencrypted); + const u8 *dest, __be16 proto, bool unencrypted, + u64 *cookie); int ieee80211_probe_mesh_link(struct wiphy *wiphy, struct net_device *dev, const u8 *buf, size_t len); @@ -1894,6 +1899,7 @@ void ieee80211_he_cap_ie_to_sta_he_cap(struct ieee80211_sub_if_data *sdata, struct ieee80211_supported_band *sband, const u8 *he_cap_ie, u8 he_cap_len, + const struct ieee80211_he_6ghz_capa *he_6ghz_capa, struct sta_info *sta); void ieee80211_he_spr_ie_to_bss_conf(struct ieee80211_vif *vif, @@ -1912,6 +1918,7 @@ void ieee80211_process_measurement_req(struct ieee80211_sub_if_data *sdata, * @sdata: the sdata of the interface which has received the frame * @elems: parsed 802.11 elements received with the frame * @current_band: indicates the current band + * @vht_cap_info: VHT capabilities of the transmitter * @sta_flags: contains information about own capabilities and restrictions * to decide which channel switch announcements can be accepted. 
Only the * following subset of &enum ieee80211_sta_flags are evaluated: @@ -1926,6 +1933,7 @@ void ieee80211_process_measurement_req(struct ieee80211_sub_if_data *sdata, int ieee80211_parse_ch_switch_ie(struct ieee80211_sub_if_data *sdata, struct ieee802_11_elems *elems, enum nl80211_band current_band, + u32 vht_cap_info, u32 sta_flags, u8 *bssid, struct ieee80211_csa_ie *csa_ie); @@ -2136,7 +2144,7 @@ enum { IEEE80211_PROBE_FLAG_RANDOM_SN = BIT(2), }; -int ieee80211_build_preq_ies(struct ieee80211_local *local, u8 *buffer, +int ieee80211_build_preq_ies(struct ieee80211_sub_if_data *sdata, u8 *buffer, size_t buffer_len, struct ieee80211_scan_ies *ie_desc, const u8 *ie, size_t ie_len, @@ -2174,7 +2182,9 @@ u8 ieee80211_ie_len_he_cap(struct ieee80211_sub_if_data *sdata, u8 iftype); u8 *ieee80211_ie_build_he_cap(u8 *pos, const struct ieee80211_sta_he_cap *he_cap, u8 *end); -u8 *ieee80211_ie_build_he_oper(u8 *pos); +void ieee80211_ie_build_he_6ghz_cap(struct ieee80211_sub_if_data *sdata, + struct sk_buff *skb); +u8 *ieee80211_ie_build_he_oper(u8 *pos, struct cfg80211_chan_def *chandef); int ieee80211_parse_bitrates(struct cfg80211_chan_def *chandef, const struct ieee80211_supported_band *sband, const u8 *srates, int srates_len, u32 *rates); @@ -2189,10 +2199,13 @@ u8 *ieee80211_add_wmm_info_ie(u8 *buf, u8 qosinfo); /* channel management */ bool ieee80211_chandef_ht_oper(const struct ieee80211_ht_operation *ht_oper, struct cfg80211_chan_def *chandef); -bool ieee80211_chandef_vht_oper(struct ieee80211_hw *hw, +bool ieee80211_chandef_vht_oper(struct ieee80211_hw *hw, u32 vht_cap_info, const struct ieee80211_vht_operation *oper, const struct ieee80211_ht_operation *htop, struct cfg80211_chan_def *chandef); +bool ieee80211_chandef_he_6ghz_oper(struct ieee80211_sub_if_data *sdata, + const struct ieee80211_he_operation *he_oper, + struct cfg80211_chan_def *chandef); u32 ieee80211_chandef_downgrade(struct cfg80211_chan_def *c); int __must_check diff --git a/net/mac80211/main.c b/net/mac80211/main.c index 06c90d360633..b4a2efe8e83a 100644 --- a/net/mac80211/main.c +++ b/net/mac80211/main.c @@ -596,6 +596,10 @@ struct ieee80211_hw *ieee80211_alloc_hw_nm(size_t priv_data_len, NL80211_EXT_FEATURE_CONTROL_PORT_OVER_NL80211); wiphy_ext_feature_set(wiphy, NL80211_EXT_FEATURE_CONTROL_PORT_NO_PREAUTH); + wiphy_ext_feature_set(wiphy, + NL80211_EXT_FEATURE_CONTROL_PORT_OVER_NL80211_TX_STATUS); + wiphy_ext_feature_set(wiphy, + NL80211_EXT_FEATURE_SCAN_FREQ_KHZ); if (!ops->hw_scan) { wiphy->features |= NL80211_FEATURE_LOW_PRIORITY_SCAN | diff --git a/net/mac80211/mesh.c b/net/mac80211/mesh.c index 5930d07b1e43..5f1ca25b6c97 100644 --- a/net/mac80211/mesh.c +++ b/net/mac80211/mesh.c @@ -1,7 +1,7 @@ // SPDX-License-Identifier: GPL-2.0-only /* * Copyright (c) 2008, 2009 open80211s Ltd. 
- * Copyright (C) 2018 - 2019 Intel Corporation + * Copyright (C) 2018 - 2020 Intel Corporation * Authors: Luis Carlos Cobo <luisca@cozybit.com> * Javier Cardona <javier@cozybit.com> */ @@ -63,6 +63,7 @@ bool mesh_matches_local(struct ieee80211_sub_if_data *sdata, u32 basic_rates = 0; struct cfg80211_chan_def sta_chan_def; struct ieee80211_supported_band *sband; + u32 vht_cap_info = 0; /* * As support for each feature is added, check for matching @@ -96,9 +97,14 @@ bool mesh_matches_local(struct ieee80211_sub_if_data *sdata, cfg80211_chandef_create(&sta_chan_def, sdata->vif.bss_conf.chandef.chan, NL80211_CHAN_NO_HT); ieee80211_chandef_ht_oper(ie->ht_operation, &sta_chan_def); - ieee80211_chandef_vht_oper(&sdata->local->hw, + + if (ie->vht_cap_elem) + vht_cap_info = le32_to_cpu(ie->vht_cap_elem->vht_cap_info); + + ieee80211_chandef_vht_oper(&sdata->local->hw, vht_cap_info, ie->vht_operation, ie->ht_operation, &sta_chan_def); + ieee80211_chandef_he_6ghz_oper(sdata, ie->he_operation, &sta_chan_def); if (!cfg80211_chandef_compatible(&sdata->vif.bss_conf.chandef, &sta_chan_def)) @@ -415,6 +421,10 @@ int mesh_add_ht_cap_ie(struct ieee80211_sub_if_data *sdata, if (!sband) return -EINVAL; + /* HT not allowed in 6 GHz */ + if (sband->band == NL80211_BAND_6GHZ) + return 0; + if (!sband->ht_cap.ht_supported || sdata->vif.bss_conf.chandef.width == NL80211_CHAN_WIDTH_20_NOHT || sdata->vif.bss_conf.chandef.width == NL80211_CHAN_WIDTH_5 || @@ -452,6 +462,10 @@ int mesh_add_ht_oper_ie(struct ieee80211_sub_if_data *sdata, sband = local->hw.wiphy->bands[channel->band]; ht_cap = &sband->ht_cap; + /* HT not allowed in 6 GHz */ + if (sband->band == NL80211_BAND_6GHZ) + return 0; + if (!ht_cap->ht_supported || sdata->vif.bss_conf.chandef.width == NL80211_CHAN_WIDTH_20_NOHT || sdata->vif.bss_conf.chandef.width == NL80211_CHAN_WIDTH_5 || @@ -479,6 +493,10 @@ int mesh_add_vht_cap_ie(struct ieee80211_sub_if_data *sdata, if (!sband) return -EINVAL; + /* VHT not allowed in 6 GHz */ + if (sband->band == NL80211_BAND_6GHZ) + return 0; + if (!sband->vht_cap.vht_supported || sdata->vif.bss_conf.chandef.width == NL80211_CHAN_WIDTH_20_NOHT || sdata->vif.bss_conf.chandef.width == NL80211_CHAN_WIDTH_5 || @@ -516,6 +534,10 @@ int mesh_add_vht_oper_ie(struct ieee80211_sub_if_data *sdata, sband = local->hw.wiphy->bands[channel->band]; vht_cap = &sband->vht_cap; + /* VHT not allowed in 6 GHz */ + if (sband->band == NL80211_BAND_6GHZ) + return 0; + if (!vht_cap->vht_supported || sdata->vif.bss_conf.chandef.width == NL80211_CHAN_WIDTH_20_NOHT || sdata->vif.bss_conf.chandef.width == NL80211_CHAN_WIDTH_5 || @@ -565,6 +587,7 @@ int mesh_add_he_oper_ie(struct ieee80211_sub_if_data *sdata, { const struct ieee80211_sta_he_cap *he_cap; struct ieee80211_supported_band *sband; + u32 len; u8 *pos; sband = ieee80211_get_sband(sdata); @@ -578,12 +601,23 @@ int mesh_add_he_oper_ie(struct ieee80211_sub_if_data *sdata, sdata->vif.bss_conf.chandef.width == NL80211_CHAN_WIDTH_10) return 0; - if (skb_tailroom(skb) < 2 + 1 + sizeof(struct ieee80211_he_operation)) + len = 2 + 1 + sizeof(struct ieee80211_he_operation); + if (sdata->vif.bss_conf.chandef.chan->band == NL80211_BAND_6GHZ) + len += sizeof(struct ieee80211_he_6ghz_oper); + + if (skb_tailroom(skb) < len) return -ENOMEM; - pos = skb_put(skb, 2 + 1 + sizeof(struct ieee80211_he_operation)); - ieee80211_ie_build_he_oper(pos); + pos = skb_put(skb, len); + ieee80211_ie_build_he_oper(pos, &sdata->vif.bss_conf.chandef); + + return 0; +} +int mesh_add_he_6ghz_cap_ie(struct ieee80211_sub_if_data 
*sdata, + struct sk_buff *skb) +{ + ieee80211_ie_build_he_6ghz_cap(sdata, skb); return 0; } @@ -766,6 +800,8 @@ ieee80211_mesh_build_beacon(struct ieee80211_if_mesh *ifmsh) 2 + sizeof(struct ieee80211_vht_operation) + ie_len_he_cap + 2 + 1 + sizeof(struct ieee80211_he_operation) + + sizeof(struct ieee80211_he_6ghz_oper) + + 2 + 1 + sizeof(struct ieee80211_he_6ghz_capa) + ifmsh->ie_len; bcn = kzalloc(sizeof(*bcn) + head_len + tail_len, GFP_KERNEL); @@ -885,6 +921,7 @@ ieee80211_mesh_build_beacon(struct ieee80211_if_mesh *ifmsh) mesh_add_vht_oper_ie(sdata, skb) || mesh_add_he_cap_ie(sdata, skb, ie_len_he_cap) || mesh_add_he_oper_ie(sdata, skb) || + mesh_add_he_6ghz_cap_ie(sdata, skb) || mesh_add_vendor_ies(sdata, skb)) goto out_free; @@ -1045,7 +1082,7 @@ ieee80211_mesh_process_chnswitch(struct ieee80211_sub_if_data *sdata, struct ieee80211_if_mesh *ifmsh = &sdata->u.mesh; struct ieee80211_supported_band *sband; int err; - u32 sta_flags; + u32 sta_flags, vht_cap_info = 0; sdata_assert_lock(sdata); @@ -1068,8 +1105,13 @@ ieee80211_mesh_process_chnswitch(struct ieee80211_sub_if_data *sdata, break; } + if (elems->vht_cap_elem) + vht_cap_info = + le32_to_cpu(elems->vht_cap_elem->vht_cap_info); + memset(&params, 0, sizeof(params)); err = ieee80211_parse_ch_switch_ie(sdata, elems, sband->band, + vht_cap_info, sta_flags, sdata->vif.addr, &csa_ie); if (err < 0) diff --git a/net/mac80211/mesh.h b/net/mac80211/mesh.h index 953f720754e8..40492d1bd8fd 100644 --- a/net/mac80211/mesh.h +++ b/net/mac80211/mesh.h @@ -222,6 +222,8 @@ int mesh_add_he_cap_ie(struct ieee80211_sub_if_data *sdata, struct sk_buff *skb, u8 ie_len); int mesh_add_he_oper_ie(struct ieee80211_sub_if_data *sdata, struct sk_buff *skb); +int mesh_add_he_6ghz_cap_ie(struct ieee80211_sub_if_data *sdata, + struct sk_buff *skb); void mesh_rmc_free(struct ieee80211_sub_if_data *sdata); int mesh_rmc_init(struct ieee80211_sub_if_data *sdata); void ieee80211s_init(void); diff --git a/net/mac80211/mesh_hwmp.c b/net/mac80211/mesh_hwmp.c index 38a0383dfbcf..aa5150929996 100644 --- a/net/mac80211/mesh_hwmp.c +++ b/net/mac80211/mesh_hwmp.c @@ -1103,7 +1103,14 @@ void mesh_path_start_discovery(struct ieee80211_sub_if_data *sdata) mesh_path_sel_frame_tx(MPATH_PREQ, 0, sdata->vif.addr, ifmsh->sn, target_flags, mpath->dst, mpath->sn, da, 0, ttl, lifetime, 0, ifmsh->preq_id++, sdata); + + spin_lock_bh(&mpath->state_lock); + if (mpath->flags & MESH_PATH_DELETED) { + spin_unlock_bh(&mpath->state_lock); + goto enddiscovery; + } mod_timer(&mpath->timer, jiffies + mpath->discovery_timeout); + spin_unlock_bh(&mpath->state_lock); enddiscovery: rcu_read_unlock(); diff --git a/net/mac80211/mesh_plink.c b/net/mac80211/mesh_plink.c index 737c5f4dbf52..798e4b6b383f 100644 --- a/net/mac80211/mesh_plink.c +++ b/net/mac80211/mesh_plink.c @@ -238,6 +238,8 @@ static int mesh_plink_frame_tx(struct ieee80211_sub_if_data *sdata, 2 + sizeof(struct ieee80211_vht_operation) + ie_len_he_cap + 2 + 1 + sizeof(struct ieee80211_he_operation) + + sizeof(struct ieee80211_he_6ghz_oper) + + 2 + 1 + sizeof(struct ieee80211_he_6ghz_capa) + 2 + 8 + /* peering IE */ sdata->u.mesh.ie_len); if (!skb) @@ -328,7 +330,8 @@ static int mesh_plink_frame_tx(struct ieee80211_sub_if_data *sdata, mesh_add_vht_cap_ie(sdata, skb) || mesh_add_vht_oper_ie(sdata, skb) || mesh_add_he_cap_ie(sdata, skb, ie_len_he_cap) || - mesh_add_he_oper_ie(sdata, skb)) + mesh_add_he_oper_ie(sdata, skb) || + mesh_add_he_6ghz_cap_ie(sdata, skb)) goto free; } @@ -441,7 +444,9 @@ static void mesh_sta_info_init(struct 
ieee80211_sub_if_data *sdata, elems->vht_cap_elem, sta); ieee80211_he_cap_ie_to_sta_he_cap(sdata, sband, elems->he_cap, - elems->he_cap_len, sta); + elems->he_cap_len, + elems->he_6ghz_capa, + sta); if (bw != sta->sta.bandwidth) changed |= IEEE80211_RC_BW_CHANGED; diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c index a259b4487b60..5820ef02a587 100644 --- a/net/mac80211/mlme.c +++ b/net/mac80211/mlme.c @@ -145,6 +145,7 @@ static u32 ieee80211_determine_chantype(struct ieee80211_sub_if_data *sdata, struct ieee80211_supported_band *sband, struct ieee80211_channel *channel, + u32 vht_cap_info, const struct ieee80211_ht_operation *ht_oper, const struct ieee80211_vht_operation *vht_oper, const struct ieee80211_he_operation *he_oper, @@ -155,15 +156,24 @@ ieee80211_determine_chantype(struct ieee80211_sub_if_data *sdata, struct ieee80211_sta_ht_cap sta_ht_cap; u32 ht_cfreq, ret; - memcpy(&sta_ht_cap, &sband->ht_cap, sizeof(sta_ht_cap)); - ieee80211_apply_htcap_overrides(sdata, &sta_ht_cap); - memset(chandef, 0, sizeof(struct cfg80211_chan_def)); chandef->chan = channel; chandef->width = NL80211_CHAN_WIDTH_20_NOHT; chandef->center_freq1 = channel->center_freq; chandef->freq1_offset = channel->freq_offset; + if (channel->band == NL80211_BAND_6GHZ) { + if (!ieee80211_chandef_he_6ghz_oper(sdata, he_oper, chandef)) + ret = IEEE80211_STA_DISABLE_HT | + IEEE80211_STA_DISABLE_VHT | + IEEE80211_STA_DISABLE_HE; + vht_chandef = *chandef; + goto out; + } + + memcpy(&sta_ht_cap, &sband->ht_cap, sizeof(sta_ht_cap)); + ieee80211_apply_htcap_overrides(sdata, &sta_ht_cap); + if (!ht_oper || !sta_ht_cap.ht_supported) { ret = IEEE80211_STA_DISABLE_HT | IEEE80211_STA_DISABLE_VHT | @@ -223,7 +233,7 @@ ieee80211_determine_chantype(struct ieee80211_sub_if_data *sdata, memcpy(&he_oper_vht_cap, he_oper->optional, 3); he_oper_vht_cap.basic_mcs_set = cpu_to_le16(0); - if (!ieee80211_chandef_vht_oper(&sdata->local->hw, + if (!ieee80211_chandef_vht_oper(&sdata->local->hw, vht_cap_info, &he_oper_vht_cap, ht_oper, &vht_chandef)) { if (!(ifmgd->flags & IEEE80211_STA_DISABLE_HE)) @@ -232,8 +242,10 @@ ieee80211_determine_chantype(struct ieee80211_sub_if_data *sdata, ret = IEEE80211_STA_DISABLE_HE; goto out; } - } else if (!ieee80211_chandef_vht_oper(&sdata->local->hw, vht_oper, - ht_oper, &vht_chandef)) { + } else if (!ieee80211_chandef_vht_oper(&sdata->local->hw, + vht_cap_info, + vht_oper, ht_oper, + &vht_chandef)) { if (!(ifmgd->flags & IEEE80211_STA_DISABLE_VHT)) sdata_info(sdata, "AP VHT information is invalid, disable VHT\n"); @@ -329,6 +341,7 @@ out: static int ieee80211_config_bw(struct ieee80211_sub_if_data *sdata, struct sta_info *sta, const struct ieee80211_ht_cap *ht_cap, + const struct ieee80211_vht_cap *vht_cap, const struct ieee80211_ht_operation *ht_oper, const struct ieee80211_vht_operation *vht_oper, const struct ieee80211_he_operation *he_oper, @@ -343,6 +356,7 @@ static int ieee80211_config_bw(struct ieee80211_sub_if_data *sdata, u16 ht_opmode; u32 flags; enum ieee80211_sta_rx_bandwidth new_sta_bw; + u32 vht_cap_info = 0; int ret; /* if HT was/is disabled, don't track any bandwidth changes */ @@ -371,8 +385,11 @@ static int ieee80211_config_bw(struct ieee80211_sub_if_data *sdata, sdata->vif.bss_conf.ht_operation_mode = ht_opmode; } + if (vht_cap) + vht_cap_info = le32_to_cpu(vht_cap->vht_cap_info); + /* calculate new channel (type) based on HT/VHT/HE operation IEs */ - flags = ieee80211_determine_chantype(sdata, sband, chan, + flags = ieee80211_determine_chantype(sdata, sband, chan, vht_cap_info, 
ht_oper, vht_oper, he_oper, &chandef, true); @@ -658,6 +675,8 @@ static void ieee80211_add_he_ie(struct ieee80211_sub_if_data *sdata, he_cap->he_cap_elem.phy_cap_info); pos = skb_put(skb, he_cap_size); ieee80211_ie_build_he_cap(pos, he_cap, pos + he_cap_size); + + ieee80211_ie_build_he_6ghz_cap(sdata, skb); } static void ieee80211_send_assoc(struct ieee80211_sub_if_data *sdata) @@ -731,6 +750,7 @@ static void ieee80211_send_assoc(struct ieee80211_sub_if_data *sdata) 2 + 1 + sizeof(struct ieee80211_he_cap_elem) + /* HE */ sizeof(struct ieee80211_he_mcs_nss_supp) + IEEE80211_HE_PPE_THRES_MAX_LEN + + 2 + 1 + sizeof(struct ieee80211_he_6ghz_capa) + assoc_data->ie_len + /* extra IEs */ (assoc_data->fils_kek_len ? 16 /* AES-SIV */ : 0) + 9, /* WMM */ @@ -903,7 +923,8 @@ static void ieee80211_send_assoc(struct ieee80211_sub_if_data *sdata) !(ifmgd->flags & IEEE80211_STA_DISABLE_VHT))) ifmgd->flags |= IEEE80211_STA_DISABLE_VHT; - if (!(ifmgd->flags & IEEE80211_STA_DISABLE_HT)) + if (sband->band != NL80211_BAND_6GHZ && + !(ifmgd->flags & IEEE80211_STA_DISABLE_HT)) ieee80211_add_ht_ie(sdata, skb, assoc_data->ap_ht_param, sband, chan, sdata->smps_mode); @@ -957,7 +978,8 @@ static void ieee80211_send_assoc(struct ieee80211_sub_if_data *sdata) offset = noffset; } - if (!(ifmgd->flags & IEEE80211_STA_DISABLE_VHT)) + if (sband->band != NL80211_BAND_6GHZ && + !(ifmgd->flags & IEEE80211_STA_DISABLE_VHT)) ieee80211_add_vht_ie(sdata, skb, sband, &assoc_data->ap_vht_cap); @@ -1324,6 +1346,7 @@ ieee80211_sta_process_chanswitch(struct ieee80211_sub_if_data *sdata, enum nl80211_band current_band; struct ieee80211_csa_ie csa_ie; struct ieee80211_channel_switch ch_switch; + struct ieee80211_bss *bss; int res; sdata_assert_lock(sdata); @@ -1335,7 +1358,9 @@ ieee80211_sta_process_chanswitch(struct ieee80211_sub_if_data *sdata, return; current_band = cbss->channel->band; + bss = (void *)cbss->priv; res = ieee80211_parse_ch_switch_ie(sdata, elems, current_band, + bss->vht_cap_info, ifmgd->flags, ifmgd->associated->bssid, &csa_ie); @@ -1508,6 +1533,7 @@ ieee80211_find_80211h_pwr_constr(struct ieee80211_sub_if_data *sdata, chan_increment = 1; break; case NL80211_BAND_5GHZ: + case NL80211_BAND_6GHZ: chan_increment = 4; break; } @@ -2145,7 +2171,8 @@ static u32 ieee80211_handle_bss_capability(struct ieee80211_sub_if_data *sdata, } use_short_slot = !!(capab & WLAN_CAPABILITY_SHORT_SLOT_TIME); - if (sband->band == NL80211_BAND_5GHZ) + if (sband->band == NL80211_BAND_5GHZ || + sband->band == NL80211_BAND_6GHZ) use_short_slot = true; if (use_protection != bss_conf->use_cts_prot) { @@ -3234,6 +3261,7 @@ static bool ieee80211_assoc_success(struct ieee80211_sub_if_data *sdata, struct ieee80211_bss_conf *bss_conf = &sdata->vif.bss_conf; const struct cfg80211_bss_ies *bss_ies = NULL; struct ieee80211_mgd_assoc_data *assoc_data = ifmgd->assoc_data; + bool is_6ghz = cbss->channel->band == NL80211_BAND_6GHZ; u32 changed = 0; int err; bool ret; @@ -3275,11 +3303,12 @@ static bool ieee80211_assoc_success(struct ieee80211_sub_if_data *sdata, * 2G/3G/4G wifi routers, reported models include the "Onda PN51T", * "Vodafone PocketWiFi 2", "ZTE MF60" and a similar T-Mobile device. 
*/ - if ((assoc_data->wmm && !elems->wmm_param) || - (!(ifmgd->flags & IEEE80211_STA_DISABLE_HT) && - (!elems->ht_cap_elem || !elems->ht_operation)) || - (!(ifmgd->flags & IEEE80211_STA_DISABLE_VHT) && - (!elems->vht_cap_elem || !elems->vht_operation))) { + if (!is_6ghz && + ((assoc_data->wmm && !elems->wmm_param) || + (!(ifmgd->flags & IEEE80211_STA_DISABLE_HT) && + (!elems->ht_cap_elem || !elems->ht_operation)) || + (!(ifmgd->flags & IEEE80211_STA_DISABLE_VHT) && + (!elems->vht_cap_elem || !elems->vht_operation)))) { const struct cfg80211_bss_ies *ies; struct ieee802_11_elems bss_elems; @@ -3337,7 +3366,7 @@ static bool ieee80211_assoc_success(struct ieee80211_sub_if_data *sdata, * We previously checked these in the beacon/probe response, so * they should be present here. This is just a safety net. */ - if (!(ifmgd->flags & IEEE80211_STA_DISABLE_HT) && + if (!is_6ghz && !(ifmgd->flags & IEEE80211_STA_DISABLE_HT) && (!elems->wmm_param || !elems->ht_cap_elem || !elems->ht_operation)) { sdata_info(sdata, "HT AP is missing WMM params or HT capability/operation\n"); @@ -3345,7 +3374,7 @@ static bool ieee80211_assoc_success(struct ieee80211_sub_if_data *sdata, goto out; } - if (!(ifmgd->flags & IEEE80211_STA_DISABLE_VHT) && + if (!is_6ghz && !(ifmgd->flags & IEEE80211_STA_DISABLE_VHT) && (!elems->vht_cap_elem || !elems->vht_operation)) { sdata_info(sdata, "VHT AP is missing VHT capability/operation\n"); @@ -3353,6 +3382,14 @@ static bool ieee80211_assoc_success(struct ieee80211_sub_if_data *sdata, goto out; } + if (is_6ghz && !(ifmgd->flags & IEEE80211_STA_DISABLE_HE) && + !elems->he_6ghz_capa) { + sdata_info(sdata, + "HE 6 GHz AP is missing HE 6 GHz band capability\n"); + ret = false; + goto out; + } + mutex_lock(&sdata->local->sta_mtx); /* * station info was already allocated and inserted before @@ -3395,6 +3432,7 @@ static bool ieee80211_assoc_success(struct ieee80211_sub_if_data *sdata, ieee80211_he_cap_ie_to_sta_he_cap(sdata, sband, elems->he_cap, elems->he_cap_len, + elems->he_6ghz_capa, sta); bss_conf->he_support = sta->sta.he_cap.has_he; @@ -4094,8 +4132,8 @@ static void ieee80211_rx_mgmt_beacon(struct ieee80211_sub_if_data *sdata, changed |= ieee80211_recalc_twt_req(sdata, sta, &elems); - if (ieee80211_config_bw(sdata, sta, - elems.ht_cap_elem, elems.ht_operation, + if (ieee80211_config_bw(sdata, sta, elems.ht_cap_elem, + elems.vht_cap_elem, elems.ht_operation, elems.vht_operation, elems.he_operation, bssid, &changed)) { mutex_unlock(&local->sta_mtx); @@ -4812,6 +4850,8 @@ static int ieee80211_prep_channel(struct ieee80211_sub_if_data *sdata, const struct ieee80211_he_operation *he_oper = NULL; struct ieee80211_supported_band *sband; struct cfg80211_chan_def chandef; + bool is_6ghz = cbss->channel->band == NL80211_BAND_6GHZ; + struct ieee80211_bss *bss = (void *)cbss->priv; int ret; u32 i; bool have_80mhz; @@ -4823,21 +4863,23 @@ static int ieee80211_prep_channel(struct ieee80211_sub_if_data *sdata, IEEE80211_STA_DISABLE_160MHZ); /* disable HT/VHT/HE if we don't support them */ - if (!sband->ht_cap.ht_supported) { + if (!sband->ht_cap.ht_supported && !is_6ghz) { ifmgd->flags |= IEEE80211_STA_DISABLE_HT; ifmgd->flags |= IEEE80211_STA_DISABLE_VHT; ifmgd->flags |= IEEE80211_STA_DISABLE_HE; } - if (!sband->vht_cap.vht_supported) + if (!sband->vht_cap.vht_supported && !is_6ghz) { ifmgd->flags |= IEEE80211_STA_DISABLE_VHT; + ifmgd->flags |= IEEE80211_STA_DISABLE_HE; + } if (!ieee80211_get_he_sta_cap(sband)) ifmgd->flags |= IEEE80211_STA_DISABLE_HE; rcu_read_lock(); - if (!(ifmgd->flags & 
IEEE80211_STA_DISABLE_HT)) { + if (!(ifmgd->flags & IEEE80211_STA_DISABLE_HT) && !is_6ghz) { const u8 *ht_oper_ie, *ht_cap_ie; ht_oper_ie = ieee80211_bss_get_ie(cbss, WLAN_EID_HT_OPERATION); @@ -4854,7 +4896,7 @@ static int ieee80211_prep_channel(struct ieee80211_sub_if_data *sdata, } } - if (!(ifmgd->flags & IEEE80211_STA_DISABLE_VHT)) { + if (!(ifmgd->flags & IEEE80211_STA_DISABLE_VHT) && !is_6ghz) { const u8 *vht_oper_ie, *vht_cap; vht_oper_ie = ieee80211_bss_get_ie(cbss, @@ -4910,6 +4952,7 @@ static int ieee80211_prep_channel(struct ieee80211_sub_if_data *sdata, ifmgd->flags |= ieee80211_determine_chantype(sdata, sband, cbss->channel, + bss->vht_cap_info, ht_oper, vht_oper, he_oper, &chandef, false); @@ -4918,6 +4961,11 @@ static int ieee80211_prep_channel(struct ieee80211_sub_if_data *sdata, rcu_read_unlock(); + if (ifmgd->flags & IEEE80211_STA_DISABLE_HE && is_6ghz) { + sdata_info(sdata, "Rejecting non-HE 6/7 GHz connection"); + return -EINVAL; + } + /* will change later if needed */ sdata->smps_mode = IEEE80211_SMPS_OFF; @@ -5299,6 +5347,7 @@ int ieee80211_mgd_auth(struct ieee80211_sub_if_data *sdata, int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata, struct cfg80211_assoc_request *req) { + bool is_6ghz = req->bss->channel->band == NL80211_BAND_6GHZ; struct ieee80211_local *local = sdata->local; struct ieee80211_if_managed *ifmgd = &sdata->u.mgd; struct ieee80211_bss *bss = (void *)req->bss->priv; @@ -5441,14 +5490,15 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata, if (ht_ie && ht_ie[1] >= sizeof(struct ieee80211_ht_operation)) assoc_data->ap_ht_param = ((struct ieee80211_ht_operation *)(ht_ie + 2))->ht_param; - else + else if (!is_6ghz) ifmgd->flags |= IEEE80211_STA_DISABLE_HT; vht_ie = ieee80211_bss_get_ie(req->bss, WLAN_EID_VHT_CAPABILITY); if (vht_ie && vht_ie[1] >= sizeof(struct ieee80211_vht_cap)) memcpy(&assoc_data->ap_vht_cap, vht_ie + 2, sizeof(struct ieee80211_vht_cap)); - else - ifmgd->flags |= IEEE80211_STA_DISABLE_VHT; + else if (!is_6ghz) + ifmgd->flags |= IEEE80211_STA_DISABLE_VHT | + IEEE80211_STA_DISABLE_HE; rcu_read_unlock(); if (WARN((sdata->vif.driver_flags & IEEE80211_VIF_SUPPORTS_UAPSD) && @@ -5549,7 +5599,7 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata, assoc_data->timeout_started = true; assoc_data->need_beacon = true; } else if (beacon_ies) { - const u8 *ie; + const struct element *elem; u8 dtim_count = 0; ieee80211_get_dtim(beacon_ies, &dtim_count, @@ -5566,15 +5616,15 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata, sdata->vif.bss_conf.sync_dtim_count = dtim_count; } - ie = cfg80211_find_ext_ie(WLAN_EID_EXT_MULTIPLE_BSSID_CONFIGURATION, - beacon_ies->data, beacon_ies->len); - if (ie && ie[1] >= 3) - sdata->vif.bss_conf.profile_periodicity = ie[4]; + elem = cfg80211_find_ext_elem(WLAN_EID_EXT_MULTIPLE_BSSID_CONFIGURATION, + beacon_ies->data, beacon_ies->len); + if (elem && elem->datalen >= 3) + sdata->vif.bss_conf.profile_periodicity = elem->data[2]; - ie = cfg80211_find_ie(WLAN_EID_EXT_CAPABILITY, - beacon_ies->data, beacon_ies->len); - if (ie && ie[1] >= 11 && - (ie[10] & WLAN_EXT_CAPA11_EMA_SUPPORT)) + elem = cfg80211_find_elem(WLAN_EID_EXT_CAPABILITY, + beacon_ies->data, beacon_ies->len); + if (elem && elem->datalen >= 11 && + (elem->data[10] & WLAN_EXT_CAPA11_EMA_SUPPORT)) sdata->vif.bss_conf.ema_ap = true; } else { assoc_data->timeout = jiffies; diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c index eaf8931e4627..21854a61a2b7 100644 --- a/net/mac80211/rx.c +++ b/net/mac80211/rx.c @@ -93,13 
+93,44 @@ static u8 *ieee80211_get_bssid(struct ieee80211_hdr *hdr, size_t len, * This function cleans up the SKB, i.e. it removes all the stuff * only useful for monitoring. */ -static void remove_monitor_info(struct sk_buff *skb, - unsigned int present_fcs_len, - unsigned int rtap_space) +static struct sk_buff *ieee80211_clean_skb(struct sk_buff *skb, + unsigned int present_fcs_len, + unsigned int rtap_space) { + struct ieee80211_hdr *hdr; + unsigned int hdrlen; + __le16 fc; + if (present_fcs_len) __pskb_trim(skb, skb->len - present_fcs_len); __pskb_pull(skb, rtap_space); + + hdr = (void *)skb->data; + fc = hdr->frame_control; + + /* + * Remove the HT-Control field (if present) on management + * frames after we've sent the frame to monitoring. We + * (currently) don't need it, and don't properly parse + * frames with it present, due to the assumption of a + * fixed management header length. + */ + if (likely(!ieee80211_is_mgmt(fc) || !ieee80211_has_order(fc))) + return skb; + + hdrlen = ieee80211_hdrlen(fc); + hdr->frame_control &= ~cpu_to_le16(IEEE80211_FCTL_ORDER); + + if (!pskb_may_pull(skb, hdrlen)) { + dev_kfree_skb(skb); + return NULL; + } + + memmove(skb->data + IEEE80211_HT_CTL_LEN, skb->data, + hdrlen - IEEE80211_HT_CTL_LEN); + __pskb_pull(skb, IEEE80211_HT_CTL_LEN); + + return skb; } static inline bool should_drop_frame(struct sk_buff *skb, int present_fcs_len, @@ -827,8 +858,8 @@ ieee80211_rx_monitor(struct ieee80211_local *local, struct sk_buff *origskb, return NULL; } - remove_monitor_info(origskb, present_fcs_len, rtap_space); - return origskb; + return ieee80211_clean_skb(origskb, present_fcs_len, + rtap_space); } ieee80211_handle_mu_mimo_mon(monitor_sdata, origskb, rtap_space); @@ -871,8 +902,7 @@ ieee80211_rx_monitor(struct ieee80211_local *local, struct sk_buff *origskb, if (!origskb) return NULL; - remove_monitor_info(origskb, present_fcs_len, rtap_space); - return origskb; + return ieee80211_clean_skb(origskb, present_fcs_len, rtap_space); } static void ieee80211_parse_qos(struct ieee80211_rx_data *rx) @@ -3095,9 +3125,10 @@ ieee80211_rx_h_mgmt_check(struct ieee80211_rx_data *rx) !(status->flag & RX_FLAG_NO_SIGNAL_VAL)) sig = status->signal; - cfg80211_report_obss_beacon(rx->local->hw.wiphy, - rx->skb->data, rx->skb->len, - status->freq, sig); + cfg80211_report_obss_beacon_khz(rx->local->hw.wiphy, + rx->skb->data, rx->skb->len, + ieee80211_rx_status_to_khz(status), + sig); rx->flags |= IEEE80211_RX_BEACON_REPORTED; } @@ -3353,19 +3384,6 @@ ieee80211_rx_h_action(struct ieee80211_rx_data *rx) } } break; - case WLAN_CATEGORY_SA_QUERY: - if (len < (IEEE80211_MIN_ACTION_SIZE + - sizeof(mgmt->u.action.u.sa_query))) - break; - - switch (mgmt->u.action.u.sa_query.action) { - case WLAN_ACTION_SA_QUERY_REQUEST: - if (sdata->vif.type != NL80211_IFTYPE_STATION) - break; - ieee80211_process_sa_query_req(sdata, mgmt, len); - goto handled; - } - break; case WLAN_CATEGORY_SELF_PROTECTED: if (len < (IEEE80211_MIN_ACTION_SIZE + sizeof(mgmt->u.action.u.self_prot.action_code))) @@ -3443,8 +3461,9 @@ ieee80211_rx_h_userspace_mgmt(struct ieee80211_rx_data *rx) !(status->flag & RX_FLAG_NO_SIGNAL_VAL)) sig = status->signal; - if (cfg80211_rx_mgmt(&rx->sdata->wdev, status->freq, sig, - rx->skb->data, rx->skb->len, 0)) { + if (cfg80211_rx_mgmt_khz(&rx->sdata->wdev, + ieee80211_rx_status_to_khz(status), sig, + rx->skb->data, rx->skb->len, 0)) { if (rx->sta) rx->sta->rx_stats.packets++; dev_kfree_skb(rx->skb); @@ -3455,6 +3474,41 @@ ieee80211_rx_h_userspace_mgmt(struct ieee80211_rx_data *rx) } 
static ieee80211_rx_result debug_noinline +ieee80211_rx_h_action_post_userspace(struct ieee80211_rx_data *rx) +{ + struct ieee80211_sub_if_data *sdata = rx->sdata; + struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *) rx->skb->data; + int len = rx->skb->len; + + if (!ieee80211_is_action(mgmt->frame_control)) + return RX_CONTINUE; + + switch (mgmt->u.action.category) { + case WLAN_CATEGORY_SA_QUERY: + if (len < (IEEE80211_MIN_ACTION_SIZE + + sizeof(mgmt->u.action.u.sa_query))) + break; + + switch (mgmt->u.action.u.sa_query.action) { + case WLAN_ACTION_SA_QUERY_REQUEST: + if (sdata->vif.type != NL80211_IFTYPE_STATION) + break; + ieee80211_process_sa_query_req(sdata, mgmt, len); + goto handled; + } + break; + } + + return RX_CONTINUE; + + handled: + if (rx->sta) + rx->sta->rx_stats.packets++; + dev_kfree_skb(rx->skb); + return RX_QUEUED; +} + +static ieee80211_rx_result debug_noinline ieee80211_rx_h_action_return(struct ieee80211_rx_data *rx) { struct ieee80211_local *local = rx->local; @@ -3734,6 +3788,7 @@ static void ieee80211_rx_handlers(struct ieee80211_rx_data *rx, CALL_RXH(ieee80211_rx_h_mgmt_check); CALL_RXH(ieee80211_rx_h_action); CALL_RXH(ieee80211_rx_h_userspace_mgmt); + CALL_RXH(ieee80211_rx_h_action_post_userspace); CALL_RXH(ieee80211_rx_h_action_return); CALL_RXH(ieee80211_rx_h_mgmt); diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c index 5db15996524f..ad90bbe57457 100644 --- a/net/mac80211/scan.c +++ b/net/mac80211/scan.c @@ -132,6 +132,12 @@ ieee80211_update_bss_from_elems(struct ieee80211_local *local, bss->beacon_rate = &sband->bitrates[rx_status->rate_idx]; } + + if (elems->vht_cap_elem) + bss->vht_cap_info = + le32_to_cpu(elems->vht_cap_elem->vht_cap_info); + else + bss->vht_cap_info = 0; } struct ieee80211_bss * @@ -307,8 +313,9 @@ ieee80211_prepare_scan_chandef(struct cfg80211_chan_def *chandef, } /* return false if no more work */ -static bool ieee80211_prep_hw_scan(struct ieee80211_local *local) +static bool ieee80211_prep_hw_scan(struct ieee80211_sub_if_data *sdata) { + struct ieee80211_local *local = sdata->local; struct cfg80211_scan_request *req; struct cfg80211_chan_def chandef; u8 bands_used = 0; @@ -355,7 +362,7 @@ static bool ieee80211_prep_hw_scan(struct ieee80211_local *local) if (req->flags & NL80211_SCAN_FLAG_MIN_PREQ_CONTENT) flags |= IEEE80211_PROBE_FLAG_MIN_CONTENT; - ielen = ieee80211_build_preq_ies(local, + ielen = ieee80211_build_preq_ies(sdata, (u8 *)local->hw_scan_req->req.ie, local->hw_scan_ies_bufsize, &local->hw_scan_req->ies, @@ -395,9 +402,12 @@ static void __ieee80211_scan_completed(struct ieee80211_hw *hw, bool aborted) if (WARN_ON(!local->scan_req)) return; + scan_sdata = rcu_dereference_protected(local->scan_sdata, + lockdep_is_held(&local->mtx)); + if (hw_scan && !aborted && !ieee80211_hw_check(&local->hw, SINGLE_SCAN_ON_ALL_BANDS) && - ieee80211_prep_hw_scan(local)) { + ieee80211_prep_hw_scan(scan_sdata)) { int rc; rc = drv_hw_scan(local, @@ -426,9 +436,6 @@ static void __ieee80211_scan_completed(struct ieee80211_hw *hw, bool aborted) cfg80211_scan_done(scan_req, &local->scan_info); } RCU_INIT_POINTER(local->scan_req, NULL); - - scan_sdata = rcu_dereference_protected(local->scan_sdata, - lockdep_is_held(&local->mtx)); RCU_INIT_POINTER(local->scan_sdata, NULL); local->scanning = 0; @@ -770,7 +777,7 @@ static int __ieee80211_start_scan(struct ieee80211_sub_if_data *sdata, ieee80211_recalc_idle(local); if (hw_scan) { - WARN_ON(!ieee80211_prep_hw_scan(local)); + WARN_ON(!ieee80211_prep_hw_scan(sdata)); rc = drv_hw_scan(local, sdata, 
local->hw_scan_req); } else { rc = ieee80211_start_sw_scan(local, sdata); @@ -1268,7 +1275,7 @@ int __ieee80211_request_sched_scan_start(struct ieee80211_sub_if_data *sdata, ieee80211_prepare_scan_chandef(&chandef, req->scan_width); - ieee80211_build_preq_ies(local, ie, num_bands * iebufsz, + ieee80211_build_preq_ies(sdata, ie, num_bands * iebufsz, &sched_scan_ies, req->ie, req->ie_len, bands_used, rate_masks, &chandef, flags); diff --git a/net/mac80211/spectmgmt.c b/net/mac80211/spectmgmt.c index 5fe2b645912f..ae1cb2c68722 100644 --- a/net/mac80211/spectmgmt.c +++ b/net/mac80211/spectmgmt.c @@ -9,7 +9,7 @@ * Copyright 2007, Michael Wu <flamingice@sourmilk.net> * Copyright 2007-2008, Intel Corporation * Copyright 2008, Johannes Berg <johannes@sipsolutions.net> - * Copyright (C) 2018 Intel Corporation + * Copyright (C) 2018, 2020 Intel Corporation */ #include <linux/ieee80211.h> @@ -22,6 +22,7 @@ int ieee80211_parse_ch_switch_ie(struct ieee80211_sub_if_data *sdata, struct ieee802_11_elems *elems, enum nl80211_band current_band, + u32 vht_cap_info, u32 sta_flags, u8 *bssid, struct ieee80211_csa_ie *csa_ie) { @@ -150,6 +151,7 @@ int ieee80211_parse_ch_switch_ie(struct ieee80211_sub_if_data *sdata, /* ignore if parsing fails */ if (!ieee80211_chandef_vht_oper(&sdata->local->hw, + vht_cap_info, &vht_oper, &ht_oper, &new_vht_chandef)) new_vht_chandef.chan = NULL; diff --git a/net/mac80211/status.c b/net/mac80211/status.c index 22512805eafb..7b1bacac39c6 100644 --- a/net/mac80211/status.c +++ b/net/mac80211/status.c @@ -649,10 +649,17 @@ static void ieee80211_report_ack_skb(struct ieee80211_local *local, info->status.ack_signal, info->status.is_valid_ack_signal, GFP_ATOMIC); - else + else if (ieee80211_is_mgmt(hdr->frame_control)) cfg80211_mgmt_tx_status(&sdata->wdev, cookie, skb->data, skb->len, acked, GFP_ATOMIC); + else + cfg80211_control_port_tx_status(&sdata->wdev, + cookie, + skb->data, + skb->len, + acked, + GFP_ATOMIC); } rcu_read_unlock(); diff --git a/net/mac80211/tdls.c b/net/mac80211/tdls.c index 8ad420db3766..4b0cff4a07bd 100644 --- a/net/mac80211/tdls.c +++ b/net/mac80211/tdls.c @@ -1054,7 +1054,7 @@ ieee80211_tdls_prep_mgmt_packet(struct wiphy *wiphy, struct net_device *dev, /* disable bottom halves when entering the Tx path */ local_bh_disable(); - __ieee80211_subif_start_xmit(skb, dev, flags, 0); + __ieee80211_subif_start_xmit(skb, dev, flags, 0, NULL); local_bh_enable(); return ret; diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c index 47f460c8bd74..e9ce658141f5 100644 --- a/net/mac80211/tx.c +++ b/net/mac80211/tx.c @@ -2436,13 +2436,19 @@ int ieee80211_lookup_ra_sta(struct ieee80211_sub_if_data *sdata, return 0; } -static int ieee80211_store_ack_skb(struct ieee80211_local *local, +static u16 ieee80211_store_ack_skb(struct ieee80211_local *local, struct sk_buff *skb, - u32 *info_flags) + u32 *info_flags, + u64 *cookie) { - struct sk_buff *ack_skb = skb_clone_sk(skb); + struct sk_buff *ack_skb; u16 info_id = 0; + if (skb->sk) + ack_skb = skb_clone_sk(skb); + else + ack_skb = skb_clone(skb, GFP_ATOMIC); + if (ack_skb) { unsigned long flags; int id; @@ -2455,6 +2461,10 @@ static int ieee80211_store_ack_skb(struct ieee80211_local *local, if (id >= 0) { info_id = id; *info_flags |= IEEE80211_TX_CTL_REQ_TX_STATUS; + if (cookie) { + *cookie = ieee80211_mgmt_tx_cookie(local); + IEEE80211_SKB_CB(ack_skb)->ack.cookie = *cookie; + } } else { kfree_skb(ack_skb); } @@ -2484,7 +2494,8 @@ static int ieee80211_store_ack_skb(struct ieee80211_local *local, */ static struct sk_buff 
*ieee80211_build_hdr(struct ieee80211_sub_if_data *sdata, struct sk_buff *skb, u32 info_flags, - struct sta_info *sta, u32 ctrl_flags) + struct sta_info *sta, u32 ctrl_flags, + u64 *cookie) { struct ieee80211_local *local = sdata->local; struct ieee80211_tx_info *info; @@ -2755,9 +2766,11 @@ static struct sk_buff *ieee80211_build_hdr(struct ieee80211_sub_if_data *sdata, goto free; } - if (unlikely(!multicast && skb->sk && - skb_shinfo(skb)->tx_flags & SKBTX_WIFI_STATUS)) - info_id = ieee80211_store_ack_skb(local, skb, &info_flags); + if (unlikely(!multicast && ((skb->sk && + skb_shinfo(skb)->tx_flags & SKBTX_WIFI_STATUS) || + ctrl_flags & IEEE80211_TX_CTL_REQ_TX_STATUS))) + info_id = ieee80211_store_ack_skb(local, skb, &info_flags, + cookie); /* * If the skb is shared we need to obtain our own copy. @@ -3913,7 +3926,8 @@ EXPORT_SYMBOL(ieee80211_txq_schedule_start); void __ieee80211_subif_start_xmit(struct sk_buff *skb, struct net_device *dev, u32 info_flags, - u32 ctrl_flags) + u32 ctrl_flags, + u64 *cookie) { struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev); struct ieee80211_local *local = sdata->local; @@ -3983,7 +3997,7 @@ void __ieee80211_subif_start_xmit(struct sk_buff *skb, skb_mark_not_on_list(skb); skb = ieee80211_build_hdr(sdata, skb, info_flags, - sta, ctrl_flags); + sta, ctrl_flags, cookie); if (IS_ERR(skb)) { kfree_skb_list(next); goto out; @@ -4125,9 +4139,9 @@ netdev_tx_t ieee80211_subif_start_xmit(struct sk_buff *skb, __skb_queue_head_init(&queue); ieee80211_convert_to_unicast(skb, dev, &queue); while ((skb = __skb_dequeue(&queue))) - __ieee80211_subif_start_xmit(skb, dev, 0, 0); + __ieee80211_subif_start_xmit(skb, dev, 0, 0, NULL); } else { - __ieee80211_subif_start_xmit(skb, dev, 0, 0); + __ieee80211_subif_start_xmit(skb, dev, 0, 0, NULL); } return NETDEV_TX_OK; @@ -4215,7 +4229,7 @@ static void ieee80211_8023_xmit(struct ieee80211_sub_if_data *sdata, if (unlikely(!multicast && skb->sk && skb_shinfo(skb)->tx_flags & SKBTX_WIFI_STATUS)) - ieee80211_store_ack_skb(local, skb, &info->flags); + ieee80211_store_ack_skb(local, skb, &info->flags, NULL); memset(info, 0, sizeof(*info)); @@ -4299,7 +4313,7 @@ ieee80211_build_data_template(struct ieee80211_sub_if_data *sdata, goto out; } - skb = ieee80211_build_hdr(sdata, skb, info_flags, sta, 0); + skb = ieee80211_build_hdr(sdata, skb, info_flags, sta, 0, NULL); if (IS_ERR(skb)) goto out; @@ -5339,14 +5353,15 @@ void __ieee80211_tx_skb_tid_band(struct ieee80211_sub_if_data *sdata, int ieee80211_tx_control_port(struct wiphy *wiphy, struct net_device *dev, const u8 *buf, size_t len, - const u8 *dest, __be16 proto, bool unencrypted) + const u8 *dest, __be16 proto, bool unencrypted, + u64 *cookie) { struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev); struct ieee80211_local *local = sdata->local; struct sk_buff *skb; struct ethhdr *ehdr; u32 ctrl_flags = 0; - u32 flags; + u32 flags = 0; /* Only accept CONTROL_PORT_PROTOCOL configured in CONNECT/ASSOCIATE * or Pre-Authentication @@ -5359,9 +5374,13 @@ int ieee80211_tx_control_port(struct wiphy *wiphy, struct net_device *dev, ctrl_flags |= IEEE80211_TX_CTRL_PORT_CTRL_PROTO; if (unencrypted) - flags = IEEE80211_TX_INTFL_DONT_ENCRYPT; - else - flags = 0; + flags |= IEEE80211_TX_INTFL_DONT_ENCRYPT; + + if (cookie) + ctrl_flags |= IEEE80211_TX_CTL_REQ_TX_STATUS; + + flags |= IEEE80211_TX_INTFL_NL80211_FRAME_TX | + IEEE80211_TX_CTL_INJECTED; skb = dev_alloc_skb(local->hw.extra_tx_headroom + sizeof(struct ethhdr) + len); @@ -5382,10 +5401,15 @@ int 
ieee80211_tx_control_port(struct wiphy *wiphy, struct net_device *dev, skb_reset_network_header(skb); skb_reset_mac_header(skb); + /* mutex lock is only needed for incrementing the cookie counter */ + mutex_lock(&local->mtx); + local_bh_disable(); - __ieee80211_subif_start_xmit(skb, skb->dev, flags, ctrl_flags); + __ieee80211_subif_start_xmit(skb, skb->dev, flags, ctrl_flags, cookie); local_bh_enable(); + mutex_unlock(&local->mtx); + return 0; } @@ -5412,7 +5436,8 @@ int ieee80211_probe_mesh_link(struct wiphy *wiphy, struct net_device *dev, local_bh_disable(); __ieee80211_subif_start_xmit(skb, skb->dev, 0, - IEEE80211_TX_CTRL_SKIP_MPATH_LOOKUP); + IEEE80211_TX_CTRL_SKIP_MPATH_LOOKUP, + NULL); local_bh_enable(); return 0; diff --git a/net/mac80211/util.c b/net/mac80211/util.c index 20436c86b9bf..21c94094a699 100644 --- a/net/mac80211/util.c +++ b/net/mac80211/util.c @@ -936,6 +936,10 @@ static void ieee80211_parse_extension_element(u32 *crc, len >= ieee80211_he_spr_size(data)) elems->he_spr = data; break; + case WLAN_EID_EXT_HE_6GHZ_CAPA: + if (len == sizeof(*elems->he_6ghz_capa)) + elems->he_6ghz_capa = data; + break; } } @@ -1659,7 +1663,20 @@ void ieee80211_send_deauth_disassoc(struct ieee80211_sub_if_data *sdata, } } -static int ieee80211_build_preq_ies_band(struct ieee80211_local *local, +static u8 *ieee80211_write_he_6ghz_cap(u8 *pos, __le16 cap, u8 *end) +{ + if ((end - pos) < 5) + return pos; + + *pos++ = WLAN_EID_EXTENSION; + *pos++ = 1 + sizeof(cap); + *pos++ = WLAN_EID_EXT_HE_6GHZ_CAPA; + memcpy(pos, &cap, sizeof(cap)); + + return pos + 2; +} + +static int ieee80211_build_preq_ies_band(struct ieee80211_sub_if_data *sdata, u8 *buffer, size_t buffer_len, const u8 *ie, size_t ie_len, enum nl80211_band band, @@ -1667,6 +1684,7 @@ static int ieee80211_build_preq_ies_band(struct ieee80211_local *local, struct cfg80211_chan_def *chandef, size_t *offset, u32 flags) { + struct ieee80211_local *local = sdata->local; struct ieee80211_supported_band *sband; const struct ieee80211_sta_he_cap *he_cap; u8 *pos = buffer, *end = buffer + buffer_len; @@ -1844,6 +1862,14 @@ static int ieee80211_build_preq_ies_band(struct ieee80211_local *local, pos = ieee80211_ie_build_he_cap(pos, he_cap, end); if (!pos) goto out_err; + + if (sband->band == NL80211_BAND_6GHZ) { + enum nl80211_iftype iftype = + ieee80211_vif_type_p2p(&sdata->vif); + __le16 cap = ieee80211_get_he_6ghz_capa(sband, iftype); + + pos = ieee80211_write_he_6ghz_cap(pos, cap, end); + } } /* @@ -1858,7 +1884,7 @@ static int ieee80211_build_preq_ies_band(struct ieee80211_local *local, return pos - buffer; } -int ieee80211_build_preq_ies(struct ieee80211_local *local, u8 *buffer, +int ieee80211_build_preq_ies(struct ieee80211_sub_if_data *sdata, u8 *buffer, size_t buffer_len, struct ieee80211_scan_ies *ie_desc, const u8 *ie, size_t ie_len, @@ -1873,7 +1899,7 @@ int ieee80211_build_preq_ies(struct ieee80211_local *local, u8 *buffer, for (i = 0; i < NUM_NL80211_BANDS; i++) { if (bands_used & BIT(i)) { - pos += ieee80211_build_preq_ies_band(local, + pos += ieee80211_build_preq_ies_band(sdata, buffer + pos, buffer_len - pos, ie, ie_len, i, @@ -1935,7 +1961,7 @@ struct sk_buff *ieee80211_build_probe_req(struct ieee80211_sub_if_data *sdata, return NULL; rate_masks[chan->band] = ratemask; - ies_len = ieee80211_build_preq_ies(local, skb_tail_pointer(skb), + ies_len = ieee80211_build_preq_ies(sdata, skb_tail_pointer(skb), skb_tailroom(skb), &dummy_ie_desc, ie, ie_len, BIT(chan->band), rate_masks, &chandef, flags); @@ -2835,6 +2861,50 @@ end: return 
pos; } +void ieee80211_ie_build_he_6ghz_cap(struct ieee80211_sub_if_data *sdata, + struct sk_buff *skb) +{ + struct ieee80211_supported_band *sband; + const struct ieee80211_sband_iftype_data *iftd; + enum nl80211_iftype iftype = ieee80211_vif_type_p2p(&sdata->vif); + u8 *pos; + u16 cap; + + sband = ieee80211_get_sband(sdata); + if (!sband) + return; + + iftd = ieee80211_get_sband_iftype_data(sband, iftype); + if (WARN_ON(!iftd)) + return; + + cap = le16_to_cpu(iftd->he_6ghz_capa.capa); + cap &= ~IEEE80211_HE_6GHZ_CAP_SM_PS; + + switch (sdata->smps_mode) { + case IEEE80211_SMPS_AUTOMATIC: + case IEEE80211_SMPS_NUM_MODES: + WARN_ON(1); + /* fall through */ + case IEEE80211_SMPS_OFF: + cap |= u16_encode_bits(WLAN_HT_CAP_SM_PS_DISABLED, + IEEE80211_HE_6GHZ_CAP_SM_PS); + break; + case IEEE80211_SMPS_STATIC: + cap |= u16_encode_bits(WLAN_HT_CAP_SM_PS_STATIC, + IEEE80211_HE_6GHZ_CAP_SM_PS); + break; + case IEEE80211_SMPS_DYNAMIC: + cap |= u16_encode_bits(WLAN_HT_CAP_SM_PS_DYNAMIC, + IEEE80211_HE_6GHZ_CAP_SM_PS); + break; + } + + pos = skb_put(skb, 2 + 1 + sizeof(cap)); + ieee80211_write_he_6ghz_cap(pos, cpu_to_le16(cap), + pos + 2 + 1 + sizeof(cap)); +} + u8 *ieee80211_ie_build_ht_oper(u8 *pos, struct ieee80211_sta_ht_cap *ht_cap, const struct cfg80211_chan_def *chandef, u16 prot_mode, bool rifs_mode) @@ -2958,13 +3028,18 @@ u8 *ieee80211_ie_build_vht_oper(u8 *pos, struct ieee80211_sta_vht_cap *vht_cap, return pos + sizeof(struct ieee80211_vht_operation); } -u8 *ieee80211_ie_build_he_oper(u8 *pos) +u8 *ieee80211_ie_build_he_oper(u8 *pos, struct cfg80211_chan_def *chandef) { struct ieee80211_he_operation *he_oper; + struct ieee80211_he_6ghz_oper *he_6ghz_op; u32 he_oper_params; + u8 ie_len = 1 + sizeof(struct ieee80211_he_operation); + + if (chandef->chan->band == NL80211_BAND_6GHZ) + ie_len += sizeof(struct ieee80211_he_6ghz_oper); *pos++ = WLAN_EID_EXTENSION; - *pos++ = 1 + sizeof(struct ieee80211_he_operation); + *pos++ = ie_len; *pos++ = WLAN_EID_EXT_HE_OPERATION; he_oper_params = 0; @@ -2974,16 +3049,68 @@ u8 *ieee80211_ie_build_he_oper(u8 *pos) IEEE80211_HE_OPERATION_ER_SU_DISABLE); he_oper_params |= u32_encode_bits(1, IEEE80211_HE_OPERATION_BSS_COLOR_DISABLED); + if (chandef->chan->band == NL80211_BAND_6GHZ) + he_oper_params |= u32_encode_bits(1, + IEEE80211_HE_OPERATION_6GHZ_OP_INFO); he_oper = (struct ieee80211_he_operation *)pos; he_oper->he_oper_params = cpu_to_le32(he_oper_params); /* don't require special HE peer rates */ he_oper->he_mcs_nss_set = cpu_to_le16(0xffff); + pos += sizeof(struct ieee80211_he_operation); - /* TODO add VHT operational and 6GHz operational subelement? */ + if (chandef->chan->band != NL80211_BAND_6GHZ) + goto out; - return pos + sizeof(struct ieee80211_vht_operation); + /* TODO add VHT operational */ + he_6ghz_op = (struct ieee80211_he_6ghz_oper *)pos; + he_6ghz_op->minrate = 6; /* 6 Mbps */ + he_6ghz_op->primary = + ieee80211_frequency_to_channel(chandef->chan->center_freq); + he_6ghz_op->ccfs0 = + ieee80211_frequency_to_channel(chandef->center_freq1); + if (chandef->center_freq2) + he_6ghz_op->ccfs1 = + ieee80211_frequency_to_channel(chandef->center_freq2); + else + he_6ghz_op->ccfs1 = 0; + + switch (chandef->width) { + case NL80211_CHAN_WIDTH_160: + /* Convert 160 MHz channel width to new style as interop + * workaround. 
+ */ + he_6ghz_op->control = + IEEE80211_HE_6GHZ_OPER_CTRL_CHANWIDTH_160MHZ; + he_6ghz_op->ccfs1 = he_6ghz_op->ccfs0; + if (chandef->chan->center_freq < chandef->center_freq1) + he_6ghz_op->ccfs0 -= 8; + else + he_6ghz_op->ccfs0 += 8; + fallthrough; + case NL80211_CHAN_WIDTH_80P80: + he_6ghz_op->control = + IEEE80211_HE_6GHZ_OPER_CTRL_CHANWIDTH_160MHZ; + break; + case NL80211_CHAN_WIDTH_80: + he_6ghz_op->control = + IEEE80211_HE_6GHZ_OPER_CTRL_CHANWIDTH_80MHZ; + break; + case NL80211_CHAN_WIDTH_40: + he_6ghz_op->control = + IEEE80211_HE_6GHZ_OPER_CTRL_CHANWIDTH_40MHZ; + break; + default: + he_6ghz_op->control = + IEEE80211_HE_6GHZ_OPER_CTRL_CHANWIDTH_20MHZ; + break; + } + + pos += sizeof(struct ieee80211_he_6ghz_oper); + +out: + return pos; } bool ieee80211_chandef_ht_oper(const struct ieee80211_ht_operation *ht_oper, @@ -3013,7 +3140,7 @@ bool ieee80211_chandef_ht_oper(const struct ieee80211_ht_operation *ht_oper, return true; } -bool ieee80211_chandef_vht_oper(struct ieee80211_hw *hw, +bool ieee80211_chandef_vht_oper(struct ieee80211_hw *hw, u32 vht_cap_info, const struct ieee80211_vht_operation *oper, const struct ieee80211_ht_operation *htop, struct cfg80211_chan_def *chandef) @@ -3025,6 +3152,10 @@ bool ieee80211_chandef_vht_oper(struct ieee80211_hw *hw, u32 vht_cap; bool support_80_80 = false; bool support_160 = false; + u8 ext_nss_bw_supp = u32_get_bits(vht_cap_info, + IEEE80211_VHT_CAP_EXT_NSS_BW_MASK); + u8 supp_chwidth = u32_get_bits(vht_cap_info, + IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_MASK); if (!oper || !htop) return false; @@ -3044,11 +3175,48 @@ bool ieee80211_chandef_vht_oper(struct ieee80211_hw *hw, IEEE80211_HT_OP_MODE_CCFS2_MASK) >> IEEE80211_HT_OP_MODE_CCFS2_SHIFT; - /* when parsing (and we know how to) CCFS1 and CCFS2 are equivalent */ ccf0 = ccfs0; - ccf1 = ccfs1; - if (!ccfs1 && ieee80211_hw_check(hw, SUPPORTS_VHT_EXT_NSS_BW)) + + /* if not supported, parse as though we didn't understand it */ + if (!ieee80211_hw_check(hw, SUPPORTS_VHT_EXT_NSS_BW)) + ext_nss_bw_supp = 0; + + /* + * Cf. IEEE 802.11 Table 9-250 + * + * We really just consider that because it's inefficient to connect + * at a higher bandwidth than we'll actually be able to use. 
+ */ + switch ((supp_chwidth << 4) | ext_nss_bw_supp) { + default: + case 0x00: + ccf1 = 0; + support_160 = false; + support_80_80 = false; + break; + case 0x01: + support_80_80 = false; + /* fall through */ + case 0x02: + case 0x03: ccf1 = ccfs2; + break; + case 0x10: + ccf1 = ccfs1; + break; + case 0x11: + case 0x12: + if (!ccfs1) + ccf1 = ccfs2; + else + ccf1 = ccfs1; + break; + case 0x13: + case 0x20: + case 0x23: + ccf1 = ccfs1; + break; + } cf0 = ieee80211_channel_to_frequency(ccf0, chandef->chan->band); cf1 = ieee80211_channel_to_frequency(ccf1, chandef->chan->band); @@ -3096,6 +3264,112 @@ bool ieee80211_chandef_vht_oper(struct ieee80211_hw *hw, return true; } +bool ieee80211_chandef_he_6ghz_oper(struct ieee80211_sub_if_data *sdata, + const struct ieee80211_he_operation *he_oper, + struct cfg80211_chan_def *chandef) +{ + struct ieee80211_local *local = sdata->local; + struct ieee80211_supported_band *sband; + enum nl80211_iftype iftype = ieee80211_vif_type_p2p(&sdata->vif); + const struct ieee80211_sta_he_cap *he_cap; + struct cfg80211_chan_def he_chandef = *chandef; + const struct ieee80211_he_6ghz_oper *he_6ghz_oper; + bool support_80_80, support_160; + u8 he_phy_cap; + u32 freq; + + if (chandef->chan->band != NL80211_BAND_6GHZ) + return true; + + sband = local->hw.wiphy->bands[NL80211_BAND_6GHZ]; + + he_cap = ieee80211_get_he_iftype_cap(sband, iftype); + if (!he_cap) { + sdata_info(sdata, "Missing iftype sband data/HE cap"); + return false; + } + + he_phy_cap = he_cap->he_cap_elem.phy_cap_info[0]; + support_160 = + he_phy_cap & + IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_160MHZ_IN_5G; + support_80_80 = + he_phy_cap & + IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_80PLUS80_MHZ_IN_5G; + + if (!he_oper) { + sdata_info(sdata, + "HE is not advertised on (on %d MHz), expect issues\n", + chandef->chan->center_freq); + return false; + } + + he_6ghz_oper = ieee80211_he_6ghz_oper(he_oper); + + if (!he_6ghz_oper) { + sdata_info(sdata, + "HE 6GHz operation missing (on %d MHz), expect issues\n", + chandef->chan->center_freq); + return false; + } + + freq = ieee80211_channel_to_frequency(he_6ghz_oper->primary, + NL80211_BAND_6GHZ); + he_chandef.chan = ieee80211_get_channel(sdata->local->hw.wiphy, freq); + + switch (u8_get_bits(he_6ghz_oper->control, + IEEE80211_HE_6GHZ_OPER_CTRL_CHANWIDTH)) { + case IEEE80211_HE_6GHZ_OPER_CTRL_CHANWIDTH_20MHZ: + he_chandef.width = NL80211_CHAN_WIDTH_20; + break; + case IEEE80211_HE_6GHZ_OPER_CTRL_CHANWIDTH_40MHZ: + he_chandef.width = NL80211_CHAN_WIDTH_40; + break; + case IEEE80211_HE_6GHZ_OPER_CTRL_CHANWIDTH_80MHZ: + he_chandef.width = NL80211_CHAN_WIDTH_80; + break; + case IEEE80211_HE_6GHZ_OPER_CTRL_CHANWIDTH_160MHZ: + he_chandef.width = NL80211_CHAN_WIDTH_80; + if (!he_6ghz_oper->ccfs1) + break; + if (abs(he_6ghz_oper->ccfs1 - he_6ghz_oper->ccfs0) == 8) { + if (support_160) + he_chandef.width = NL80211_CHAN_WIDTH_160; + } else { + if (support_80_80) + he_chandef.width = NL80211_CHAN_WIDTH_80P80; + } + break; + } + + if (he_chandef.width == NL80211_CHAN_WIDTH_160) { + he_chandef.center_freq1 = + ieee80211_channel_to_frequency(he_6ghz_oper->ccfs1, + NL80211_BAND_6GHZ); + } else { + he_chandef.center_freq1 = + ieee80211_channel_to_frequency(he_6ghz_oper->ccfs0, + NL80211_BAND_6GHZ); + he_chandef.center_freq2 = + ieee80211_channel_to_frequency(he_6ghz_oper->ccfs1, + NL80211_BAND_6GHZ); + } + + if (!cfg80211_chandef_valid(&he_chandef)) { + sdata_info(sdata, + "HE 6GHz operation resulted in invalid chandef: %d MHz/%d/%d MHz/%d MHz\n", + he_chandef.chan ? 
he_chandef.chan->center_freq : 0, + he_chandef.width, + he_chandef.center_freq1, + he_chandef.center_freq2); + return false; + } + + *chandef = he_chandef; + + return true; +} + int ieee80211_parse_bitrates(struct cfg80211_chan_def *chandef, const struct ieee80211_supported_band *sband, const u8 *srates, int srates_len, u32 *rates) diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index b2c8b57e7942..14b253d10ccf 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -1032,7 +1032,8 @@ fallback: pr_debug("block timeout %ld", timeo); mptcp_wait_data(sk, &timeo); - if (unlikely(__mptcp_tcp_fallback(msk))) + ssock = __mptcp_tcp_fallback(msk); + if (unlikely(ssock)) goto fallback; } @@ -1346,11 +1347,14 @@ static void mptcp_close(struct sock *sk, long timeout) lock_sock(sk); - mptcp_token_destroy(msk->token); inet_sk_state_store(sk, TCP_CLOSE); - __mptcp_flush_join_list(msk); - + /* be sure to always acquire the join list lock, to sync vs + * mptcp_finish_join(). + */ + spin_lock_bh(&msk->join_list_lock); + list_splice_tail_init(&msk->join_list, &msk->conn_list); + spin_unlock_bh(&msk->join_list_lock); list_splice_init(&msk->conn_list, &conn_list); data_fin_tx_seq = msk->write_seq; @@ -1540,6 +1544,7 @@ static void mptcp_destroy(struct sock *sk) { struct mptcp_sock *msk = mptcp_sk(sk); + mptcp_token_destroy(msk->token); if (msk->cached_ext) __skb_ext_put(msk->cached_ext); @@ -1706,22 +1711,30 @@ bool mptcp_finish_join(struct sock *sk) if (!msk->pm.server_side) return true; - /* passive connection, attach to msk socket */ + if (!mptcp_pm_allow_new_subflow(msk)) + return false; + + /* active connections are already on conn_list, and we can't acquire + * msk lock here. + * use the join list lock as synchronization point and double-check + * msk status to avoid racing with mptcp_close() + */ + spin_lock_bh(&msk->join_list_lock); + ret = inet_sk_state_load(parent) == TCP_ESTABLISHED; + if (ret && !WARN_ON_ONCE(!list_empty(&subflow->node))) + list_add_tail(&subflow->node, &msk->join_list); + spin_unlock_bh(&msk->join_list_lock); + if (!ret) + return false; + + /* attach to msk socket only after we are sure he will deal with us + * at close time + */ parent_sock = READ_ONCE(parent->sk_socket); if (parent_sock && !sk->sk_socket) mptcp_sock_graft(sk, parent_sock); - - ret = mptcp_pm_allow_new_subflow(msk); - if (ret) { - subflow->map_seq = msk->ack_seq; - - /* active connections are already on conn_list */ - spin_lock_bh(&msk->join_list_lock); - if (!WARN_ON_ONCE(!list_empty(&subflow->node))) - list_add_tail(&subflow->node, &msk->join_list); - spin_unlock_bh(&msk->join_list_lock); - } - return ret; + subflow->map_seq = msk->ack_seq; + return true; } static bool mptcp_memory_free(const struct sock *sk, int wake) @@ -1788,6 +1801,14 @@ static int mptcp_stream_connect(struct socket *sock, struct sockaddr *uaddr, int err; lock_sock(sock->sk); + if (sock->state != SS_UNCONNECTED && msk->subflow) { + /* pending connection or invalid state, let existing subflow + * cope with that + */ + ssock = msk->subflow; + goto do_connect; + } + ssock = __mptcp_socket_create(msk, TCP_SYN_SENT); if (IS_ERR(ssock)) { err = PTR_ERR(ssock); @@ -1802,9 +1823,17 @@ static int mptcp_stream_connect(struct socket *sock, struct sockaddr *uaddr, mptcp_subflow_ctx(ssock->sk)->request_mptcp = 0; #endif +do_connect: err = ssock->ops->connect(ssock, uaddr, addr_len, flags); - inet_sk_state_store(sock->sk, inet_sk_state_load(ssock->sk)); - mptcp_copy_inaddrs(sock->sk, ssock->sk); + sock->state = ssock->state; + + /* on 
successful connect, the msk state will be moved to established by + * subflow_finish_connect() + */ + if (!err || err == EINPROGRESS) + mptcp_copy_inaddrs(sock->sk, ssock->sk); + else + inet_sk_state_store(sock->sk, inet_sk_state_load(ssock->sk)); unlock: release_sock(sock->sk); diff --git a/net/netfilter/ipset/ip_set_list_set.c b/net/netfilter/ipset/ip_set_list_set.c index cd747c0962fd..5a67f7966574 100644 --- a/net/netfilter/ipset/ip_set_list_set.c +++ b/net/netfilter/ipset/ip_set_list_set.c @@ -59,7 +59,7 @@ list_set_ktest(struct ip_set *set, const struct sk_buff *skb, /* Don't lookup sub-counters at all */ opt->cmdflags &= ~IPSET_FLAG_MATCH_COUNTERS; if (opt->cmdflags & IPSET_FLAG_SKIP_SUBCOUNTER_UPDATE) - opt->cmdflags &= ~IPSET_FLAG_SKIP_COUNTER_UPDATE; + opt->cmdflags |= IPSET_FLAG_SKIP_COUNTER_UPDATE; list_for_each_entry_rcu(e, &map->members, list) { ret = ip_set_test(e->id, skb, par, opt); if (ret <= 0) diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c index 1d57b95d3481..79cd9dde457b 100644 --- a/net/netfilter/nf_conntrack_core.c +++ b/net/netfilter/nf_conntrack_core.c @@ -1974,13 +1974,22 @@ const struct nla_policy nf_ct_port_nla_policy[CTA_PROTO_MAX+1] = { EXPORT_SYMBOL_GPL(nf_ct_port_nla_policy); int nf_ct_port_nlattr_to_tuple(struct nlattr *tb[], - struct nf_conntrack_tuple *t) + struct nf_conntrack_tuple *t, + u_int32_t flags) { - if (!tb[CTA_PROTO_SRC_PORT] || !tb[CTA_PROTO_DST_PORT]) - return -EINVAL; + if (flags & CTA_FILTER_FLAG(CTA_PROTO_SRC_PORT)) { + if (!tb[CTA_PROTO_SRC_PORT]) + return -EINVAL; + + t->src.u.tcp.port = nla_get_be16(tb[CTA_PROTO_SRC_PORT]); + } - t->src.u.tcp.port = nla_get_be16(tb[CTA_PROTO_SRC_PORT]); - t->dst.u.tcp.port = nla_get_be16(tb[CTA_PROTO_DST_PORT]); + if (flags & CTA_FILTER_FLAG(CTA_PROTO_DST_PORT)) { + if (!tb[CTA_PROTO_DST_PORT]) + return -EINVAL; + + t->dst.u.tcp.port = nla_get_be16(tb[CTA_PROTO_DST_PORT]); + } return 0; } @@ -2016,22 +2025,18 @@ static void nf_conntrack_attach(struct sk_buff *nskb, const struct sk_buff *skb) nf_conntrack_get(skb_nfct(nskb)); } -static int nf_conntrack_update(struct net *net, struct sk_buff *skb) +static int __nf_conntrack_update(struct net *net, struct sk_buff *skb, + struct nf_conn *ct, + enum ip_conntrack_info ctinfo) { struct nf_conntrack_tuple_hash *h; struct nf_conntrack_tuple tuple; - enum ip_conntrack_info ctinfo; struct nf_nat_hook *nat_hook; unsigned int status; - struct nf_conn *ct; int dataoff; u16 l3num; u8 l4num; - ct = nf_ct_get(skb, &ctinfo); - if (!ct || nf_ct_is_confirmed(ct)) - return 0; - l3num = nf_ct_l3num(ct); dataoff = get_l4proto(skb, skb_network_offset(skb), l3num, &l4num); @@ -2088,6 +2093,76 @@ static int nf_conntrack_update(struct net *net, struct sk_buff *skb) return 0; } +/* This packet is coming from userspace via nf_queue, complete the packet + * processing after the helper invocation in nf_confirm(). 
+ */ +static int nf_confirm_cthelper(struct sk_buff *skb, struct nf_conn *ct, + enum ip_conntrack_info ctinfo) +{ + const struct nf_conntrack_helper *helper; + const struct nf_conn_help *help; + int protoff; + + help = nfct_help(ct); + if (!help) + return 0; + + helper = rcu_dereference(help->helper); + if (!(helper->flags & NF_CT_HELPER_F_USERSPACE)) + return 0; + + switch (nf_ct_l3num(ct)) { + case NFPROTO_IPV4: + protoff = skb_network_offset(skb) + ip_hdrlen(skb); + break; +#if IS_ENABLED(CONFIG_IPV6) + case NFPROTO_IPV6: { + __be16 frag_off; + u8 pnum; + + pnum = ipv6_hdr(skb)->nexthdr; + protoff = ipv6_skip_exthdr(skb, sizeof(struct ipv6hdr), &pnum, + &frag_off); + if (protoff < 0 || (frag_off & htons(~0x7)) != 0) + return 0; + break; + } +#endif + default: + return 0; + } + + if (test_bit(IPS_SEQ_ADJUST_BIT, &ct->status) && + !nf_is_loopback_packet(skb)) { + if (!nf_ct_seq_adjust(skb, ct, ctinfo, protoff)) { + NF_CT_STAT_INC_ATOMIC(nf_ct_net(ct), drop); + return -1; + } + } + + /* We've seen it coming out the other side: confirm it */ + return nf_conntrack_confirm(skb) == NF_DROP ? - 1 : 0; +} + +static int nf_conntrack_update(struct net *net, struct sk_buff *skb) +{ + enum ip_conntrack_info ctinfo; + struct nf_conn *ct; + int err; + + ct = nf_ct_get(skb, &ctinfo); + if (!ct) + return 0; + + if (!nf_ct_is_confirmed(ct)) { + err = __nf_conntrack_update(net, skb, ct, ctinfo); + if (err < 0) + return err; + } + + return nf_confirm_cthelper(skb, ct, ctinfo); +} + static bool nf_conntrack_get_tuple_skb(struct nf_conntrack_tuple *dst_tuple, const struct sk_buff *skb) { diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c index 9ddfcd002d3b..d7bd8b1f27d5 100644 --- a/net/netfilter/nf_conntrack_netlink.c +++ b/net/netfilter/nf_conntrack_netlink.c @@ -54,6 +54,8 @@ #include <linux/netfilter/nfnetlink.h> #include <linux/netfilter/nfnetlink_conntrack.h> +#include "nf_internals.h" + MODULE_LICENSE("GPL"); static int ctnetlink_dump_tuples_proto(struct sk_buff *skb, @@ -544,14 +546,16 @@ static int ctnetlink_dump_info(struct sk_buff *skb, struct nf_conn *ct) static int ctnetlink_fill_info(struct sk_buff *skb, u32 portid, u32 seq, u32 type, - struct nf_conn *ct, bool extinfo) + struct nf_conn *ct, bool extinfo, unsigned int flags) { const struct nf_conntrack_zone *zone; struct nlmsghdr *nlh; struct nfgenmsg *nfmsg; struct nlattr *nest_parms; - unsigned int flags = portid ? 
NLM_F_MULTI : 0, event; + unsigned int event; + if (portid) + flags |= NLM_F_MULTI; event = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK, IPCTNL_MSG_CT_NEW); nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags); if (nlh == NULL) @@ -847,17 +851,70 @@ static int ctnetlink_done(struct netlink_callback *cb) } struct ctnetlink_filter { + u_int32_t cta_flags; u8 family; + + u_int32_t orig_flags; + u_int32_t reply_flags; + + struct nf_conntrack_tuple orig; + struct nf_conntrack_tuple reply; + struct nf_conntrack_zone zone; + struct { u_int32_t val; u_int32_t mask; } mark; }; +static const struct nla_policy cta_filter_nla_policy[CTA_FILTER_MAX + 1] = { + [CTA_FILTER_ORIG_FLAGS] = { .type = NLA_U32 }, + [CTA_FILTER_REPLY_FLAGS] = { .type = NLA_U32 }, +}; + +static int ctnetlink_parse_filter(const struct nlattr *attr, + struct ctnetlink_filter *filter) +{ + struct nlattr *tb[CTA_FILTER_MAX + 1]; + int ret = 0; + + ret = nla_parse_nested(tb, CTA_FILTER_MAX, attr, cta_filter_nla_policy, + NULL); + if (ret) + return ret; + + if (tb[CTA_FILTER_ORIG_FLAGS]) { + filter->orig_flags = nla_get_u32(tb[CTA_FILTER_ORIG_FLAGS]); + if (filter->orig_flags & ~CTA_FILTER_F_ALL) + return -EOPNOTSUPP; + } + + if (tb[CTA_FILTER_REPLY_FLAGS]) { + filter->reply_flags = nla_get_u32(tb[CTA_FILTER_REPLY_FLAGS]); + if (filter->reply_flags & ~CTA_FILTER_F_ALL) + return -EOPNOTSUPP; + } + + return 0; +} + +static int ctnetlink_parse_zone(const struct nlattr *attr, + struct nf_conntrack_zone *zone); +static int ctnetlink_parse_tuple_filter(const struct nlattr * const cda[], + struct nf_conntrack_tuple *tuple, + u32 type, u_int8_t l3num, + struct nf_conntrack_zone *zone, + u_int32_t flags); + +/* applied on filters */ +#define CTA_FILTER_F_CTA_MARK (1 << 0) +#define CTA_FILTER_F_CTA_MARK_MASK (1 << 1) + static struct ctnetlink_filter * ctnetlink_alloc_filter(const struct nlattr * const cda[], u8 family) { struct ctnetlink_filter *filter; + int err; #ifndef CONFIG_NF_CONNTRACK_MARK if (cda[CTA_MARK] || cda[CTA_MARK_MASK]) @@ -871,14 +928,65 @@ ctnetlink_alloc_filter(const struct nlattr * const cda[], u8 family) filter->family = family; #ifdef CONFIG_NF_CONNTRACK_MARK - if (cda[CTA_MARK] && cda[CTA_MARK_MASK]) { + if (cda[CTA_MARK]) { filter->mark.val = ntohl(nla_get_be32(cda[CTA_MARK])); - filter->mark.mask = ntohl(nla_get_be32(cda[CTA_MARK_MASK])); + filter->cta_flags |= CTA_FILTER_FLAG(CTA_MARK); + + if (cda[CTA_MARK_MASK]) { + filter->mark.mask = ntohl(nla_get_be32(cda[CTA_MARK_MASK])); + filter->cta_flags |= CTA_FILTER_FLAG(CTA_MARK_MASK); + } else { + filter->mark.mask = 0xffffffff; + } + } else if (cda[CTA_MARK_MASK]) { + return ERR_PTR(-EINVAL); } #endif + if (!cda[CTA_FILTER]) + return filter; + + err = ctnetlink_parse_zone(cda[CTA_ZONE], &filter->zone); + if (err < 0) + return ERR_PTR(err); + + err = ctnetlink_parse_filter(cda[CTA_FILTER], filter); + if (err < 0) + return ERR_PTR(err); + + if (filter->orig_flags) { + if (!cda[CTA_TUPLE_ORIG]) + return ERR_PTR(-EINVAL); + + err = ctnetlink_parse_tuple_filter(cda, &filter->orig, + CTA_TUPLE_ORIG, + filter->family, + &filter->zone, + filter->orig_flags); + if (err < 0) + return ERR_PTR(err); + } + + if (filter->reply_flags) { + if (!cda[CTA_TUPLE_REPLY]) + return ERR_PTR(-EINVAL); + + err = ctnetlink_parse_tuple_filter(cda, &filter->reply, + CTA_TUPLE_REPLY, + filter->family, + &filter->zone, + filter->orig_flags); + if (err < 0) + return ERR_PTR(err); + } + return filter; } +static bool ctnetlink_needs_filter(u8 family, const struct nlattr * const *cda) +{ + return 
family || cda[CTA_MARK] || cda[CTA_FILTER]; +} + static int ctnetlink_start(struct netlink_callback *cb) { const struct nlattr * const *cda = cb->data; @@ -886,7 +994,7 @@ static int ctnetlink_start(struct netlink_callback *cb) struct nfgenmsg *nfmsg = nlmsg_data(cb->nlh); u8 family = nfmsg->nfgen_family; - if (family || (cda[CTA_MARK] && cda[CTA_MARK_MASK])) { + if (ctnetlink_needs_filter(family, cda)) { filter = ctnetlink_alloc_filter(cda, family); if (IS_ERR(filter)) return PTR_ERR(filter); @@ -896,9 +1004,79 @@ static int ctnetlink_start(struct netlink_callback *cb) return 0; } +static int ctnetlink_filter_match_tuple(struct nf_conntrack_tuple *filter_tuple, + struct nf_conntrack_tuple *ct_tuple, + u_int32_t flags, int family) +{ + switch (family) { + case NFPROTO_IPV4: + if ((flags & CTA_FILTER_FLAG(CTA_IP_SRC)) && + filter_tuple->src.u3.ip != ct_tuple->src.u3.ip) + return 0; + + if ((flags & CTA_FILTER_FLAG(CTA_IP_DST)) && + filter_tuple->dst.u3.ip != ct_tuple->dst.u3.ip) + return 0; + break; + case NFPROTO_IPV6: + if ((flags & CTA_FILTER_FLAG(CTA_IP_SRC)) && + !ipv6_addr_cmp(&filter_tuple->src.u3.in6, + &ct_tuple->src.u3.in6)) + return 0; + + if ((flags & CTA_FILTER_FLAG(CTA_IP_DST)) && + !ipv6_addr_cmp(&filter_tuple->dst.u3.in6, + &ct_tuple->dst.u3.in6)) + return 0; + break; + } + + if ((flags & CTA_FILTER_FLAG(CTA_PROTO_NUM)) && + filter_tuple->dst.protonum != ct_tuple->dst.protonum) + return 0; + + switch (ct_tuple->dst.protonum) { + case IPPROTO_TCP: + case IPPROTO_UDP: + if ((flags & CTA_FILTER_FLAG(CTA_PROTO_SRC_PORT)) && + filter_tuple->src.u.tcp.port != ct_tuple->src.u.tcp.port) + return 0; + + if ((flags & CTA_FILTER_FLAG(CTA_PROTO_DST_PORT)) && + filter_tuple->dst.u.tcp.port != ct_tuple->dst.u.tcp.port) + return 0; + break; + case IPPROTO_ICMP: + if ((flags & CTA_FILTER_FLAG(CTA_PROTO_ICMP_TYPE)) && + filter_tuple->dst.u.icmp.type != ct_tuple->dst.u.icmp.type) + return 0; + if ((flags & CTA_FILTER_FLAG(CTA_PROTO_ICMP_CODE)) && + filter_tuple->dst.u.icmp.code != ct_tuple->dst.u.icmp.code) + return 0; + if ((flags & CTA_FILTER_FLAG(CTA_PROTO_ICMP_ID)) && + filter_tuple->src.u.icmp.id != ct_tuple->src.u.icmp.id) + return 0; + break; + case IPPROTO_ICMPV6: + if ((flags & CTA_FILTER_FLAG(CTA_PROTO_ICMPV6_TYPE)) && + filter_tuple->dst.u.icmp.type != ct_tuple->dst.u.icmp.type) + return 0; + if ((flags & CTA_FILTER_FLAG(CTA_PROTO_ICMPV6_CODE)) && + filter_tuple->dst.u.icmp.code != ct_tuple->dst.u.icmp.code) + return 0; + if ((flags & CTA_FILTER_FLAG(CTA_PROTO_ICMPV6_ID)) && + filter_tuple->src.u.icmp.id != ct_tuple->src.u.icmp.id) + return 0; + break; + } + + return 1; +} + static int ctnetlink_filter_match(struct nf_conn *ct, void *data) { struct ctnetlink_filter *filter = data; + struct nf_conntrack_tuple *tuple; if (filter == NULL) goto out; @@ -910,8 +1088,28 @@ static int ctnetlink_filter_match(struct nf_conn *ct, void *data) if (filter->family && nf_ct_l3num(ct) != filter->family) goto ignore_entry; + if (filter->orig_flags) { + tuple = nf_ct_tuple(ct, IP_CT_DIR_ORIGINAL); + if (!ctnetlink_filter_match_tuple(&filter->orig, tuple, + filter->orig_flags, + filter->family)) + goto ignore_entry; + } + + if (filter->reply_flags) { + tuple = nf_ct_tuple(ct, IP_CT_DIR_REPLY); + if (!ctnetlink_filter_match_tuple(&filter->reply, tuple, + filter->reply_flags, + filter->family)) + goto ignore_entry; + } + #ifdef CONFIG_NF_CONNTRACK_MARK - if ((ct->mark & filter->mark.mask) != filter->mark.val) + if ((filter->cta_flags & CTA_FILTER_FLAG(CTA_MARK_MASK)) && + (ct->mark & filter->mark.mask) 
!= filter->mark.val) + goto ignore_entry; + else if ((filter->cta_flags & CTA_FILTER_FLAG(CTA_MARK)) && + ct->mark != filter->mark.val) goto ignore_entry; #endif @@ -925,6 +1123,7 @@ ignore_entry: static int ctnetlink_dump_table(struct sk_buff *skb, struct netlink_callback *cb) { + unsigned int flags = cb->data ? NLM_F_DUMP_FILTERED : 0; struct net *net = sock_net(skb->sk); struct nf_conn *ct, *last; struct nf_conntrack_tuple_hash *h; @@ -979,7 +1178,7 @@ restart: ctnetlink_fill_info(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq, NFNL_MSG_TYPE(cb->nlh->nlmsg_type), - ct, true); + ct, true, flags); if (res < 0) { nf_conntrack_get(&ct->ct_general); cb->args[1] = (unsigned long)ct; @@ -1014,31 +1213,50 @@ out: } static int ipv4_nlattr_to_tuple(struct nlattr *tb[], - struct nf_conntrack_tuple *t) + struct nf_conntrack_tuple *t, + u_int32_t flags) { - if (!tb[CTA_IP_V4_SRC] || !tb[CTA_IP_V4_DST]) - return -EINVAL; + if (flags & CTA_FILTER_FLAG(CTA_IP_SRC)) { + if (!tb[CTA_IP_V4_SRC]) + return -EINVAL; + + t->src.u3.ip = nla_get_in_addr(tb[CTA_IP_V4_SRC]); + } + + if (flags & CTA_FILTER_FLAG(CTA_IP_DST)) { + if (!tb[CTA_IP_V4_DST]) + return -EINVAL; - t->src.u3.ip = nla_get_in_addr(tb[CTA_IP_V4_SRC]); - t->dst.u3.ip = nla_get_in_addr(tb[CTA_IP_V4_DST]); + t->dst.u3.ip = nla_get_in_addr(tb[CTA_IP_V4_DST]); + } return 0; } static int ipv6_nlattr_to_tuple(struct nlattr *tb[], - struct nf_conntrack_tuple *t) + struct nf_conntrack_tuple *t, + u_int32_t flags) { - if (!tb[CTA_IP_V6_SRC] || !tb[CTA_IP_V6_DST]) - return -EINVAL; + if (flags & CTA_FILTER_FLAG(CTA_IP_SRC)) { + if (!tb[CTA_IP_V6_SRC]) + return -EINVAL; - t->src.u3.in6 = nla_get_in6_addr(tb[CTA_IP_V6_SRC]); - t->dst.u3.in6 = nla_get_in6_addr(tb[CTA_IP_V6_DST]); + t->src.u3.in6 = nla_get_in6_addr(tb[CTA_IP_V6_SRC]); + } + + if (flags & CTA_FILTER_FLAG(CTA_IP_DST)) { + if (!tb[CTA_IP_V6_DST]) + return -EINVAL; + + t->dst.u3.in6 = nla_get_in6_addr(tb[CTA_IP_V6_DST]); + } return 0; } static int ctnetlink_parse_tuple_ip(struct nlattr *attr, - struct nf_conntrack_tuple *tuple) + struct nf_conntrack_tuple *tuple, + u_int32_t flags) { struct nlattr *tb[CTA_IP_MAX+1]; int ret = 0; @@ -1054,10 +1272,10 @@ static int ctnetlink_parse_tuple_ip(struct nlattr *attr, switch (tuple->src.l3num) { case NFPROTO_IPV4: - ret = ipv4_nlattr_to_tuple(tb, tuple); + ret = ipv4_nlattr_to_tuple(tb, tuple, flags); break; case NFPROTO_IPV6: - ret = ipv6_nlattr_to_tuple(tb, tuple); + ret = ipv6_nlattr_to_tuple(tb, tuple, flags); break; } @@ -1069,7 +1287,8 @@ static const struct nla_policy proto_nla_policy[CTA_PROTO_MAX+1] = { }; static int ctnetlink_parse_tuple_proto(struct nlattr *attr, - struct nf_conntrack_tuple *tuple) + struct nf_conntrack_tuple *tuple, + u_int32_t flags) { const struct nf_conntrack_l4proto *l4proto; struct nlattr *tb[CTA_PROTO_MAX+1]; @@ -1080,8 +1299,12 @@ static int ctnetlink_parse_tuple_proto(struct nlattr *attr, if (ret < 0) return ret; + if (!(flags & CTA_FILTER_FLAG(CTA_PROTO_NUM))) + return 0; + if (!tb[CTA_PROTO_NUM]) return -EINVAL; + tuple->dst.protonum = nla_get_u8(tb[CTA_PROTO_NUM]); rcu_read_lock(); @@ -1092,7 +1315,7 @@ static int ctnetlink_parse_tuple_proto(struct nlattr *attr, l4proto->nla_policy, NULL); if (ret == 0) - ret = l4proto->nlattr_to_tuple(tb, tuple); + ret = l4proto->nlattr_to_tuple(tb, tuple, flags); } rcu_read_unlock(); @@ -1143,10 +1366,21 @@ static const struct nla_policy tuple_nla_policy[CTA_TUPLE_MAX+1] = { [CTA_TUPLE_ZONE] = { .type = NLA_U16 }, }; +#define CTA_FILTER_F_ALL_CTA_PROTO \ + 
(CTA_FILTER_F_CTA_PROTO_SRC_PORT | \ + CTA_FILTER_F_CTA_PROTO_DST_PORT | \ + CTA_FILTER_F_CTA_PROTO_ICMP_TYPE | \ + CTA_FILTER_F_CTA_PROTO_ICMP_CODE | \ + CTA_FILTER_F_CTA_PROTO_ICMP_ID | \ + CTA_FILTER_F_CTA_PROTO_ICMPV6_TYPE | \ + CTA_FILTER_F_CTA_PROTO_ICMPV6_CODE | \ + CTA_FILTER_F_CTA_PROTO_ICMPV6_ID) + static int -ctnetlink_parse_tuple(const struct nlattr * const cda[], - struct nf_conntrack_tuple *tuple, u32 type, - u_int8_t l3num, struct nf_conntrack_zone *zone) +ctnetlink_parse_tuple_filter(const struct nlattr * const cda[], + struct nf_conntrack_tuple *tuple, u32 type, + u_int8_t l3num, struct nf_conntrack_zone *zone, + u_int32_t flags) { struct nlattr *tb[CTA_TUPLE_MAX+1]; int err; @@ -1158,23 +1392,32 @@ ctnetlink_parse_tuple(const struct nlattr * const cda[], if (err < 0) return err; - if (!tb[CTA_TUPLE_IP]) - return -EINVAL; tuple->src.l3num = l3num; - err = ctnetlink_parse_tuple_ip(tb[CTA_TUPLE_IP], tuple); - if (err < 0) - return err; + if (flags & CTA_FILTER_FLAG(CTA_IP_DST) || + flags & CTA_FILTER_FLAG(CTA_IP_SRC)) { + if (!tb[CTA_TUPLE_IP]) + return -EINVAL; - if (!tb[CTA_TUPLE_PROTO]) - return -EINVAL; + err = ctnetlink_parse_tuple_ip(tb[CTA_TUPLE_IP], tuple, flags); + if (err < 0) + return err; + } - err = ctnetlink_parse_tuple_proto(tb[CTA_TUPLE_PROTO], tuple); - if (err < 0) - return err; + if (flags & CTA_FILTER_FLAG(CTA_PROTO_NUM)) { + if (!tb[CTA_TUPLE_PROTO]) + return -EINVAL; - if (tb[CTA_TUPLE_ZONE]) { + err = ctnetlink_parse_tuple_proto(tb[CTA_TUPLE_PROTO], tuple, flags); + if (err < 0) + return err; + } else if (flags & CTA_FILTER_FLAG(ALL_CTA_PROTO)) { + /* Can't manage proto flags without a protonum */ + return -EINVAL; + } + + if ((flags & CTA_FILTER_FLAG(CTA_TUPLE_ZONE)) && tb[CTA_TUPLE_ZONE]) { if (!zone) return -EINVAL; @@ -1193,6 +1436,15 @@ ctnetlink_parse_tuple(const struct nlattr * const cda[], return 0; } +static int +ctnetlink_parse_tuple(const struct nlattr * const cda[], + struct nf_conntrack_tuple *tuple, u32 type, + u_int8_t l3num, struct nf_conntrack_zone *zone) +{ + return ctnetlink_parse_tuple_filter(cda, tuple, type, l3num, zone, + CTA_FILTER_FLAG(ALL)); +} + static const struct nla_policy help_nla_policy[CTA_HELP_MAX+1] = { [CTA_HELP_NAME] = { .type = NLA_NUL_STRING, .len = NF_CT_HELPER_NAME_LEN - 1 }, @@ -1240,6 +1492,7 @@ static const struct nla_policy ct_nla_policy[CTA_MAX+1] = { .len = NF_CT_LABELS_MAX_SIZE }, [CTA_LABELS_MASK] = { .type = NLA_BINARY, .len = NF_CT_LABELS_MAX_SIZE }, + [CTA_FILTER] = { .type = NLA_NESTED }, }; static int ctnetlink_flush_iterate(struct nf_conn *ct, void *data) @@ -1256,7 +1509,10 @@ static int ctnetlink_flush_conntrack(struct net *net, { struct ctnetlink_filter *filter = NULL; - if (family || (cda[CTA_MARK] && cda[CTA_MARK_MASK])) { + if (ctnetlink_needs_filter(family, cda)) { + if (cda[CTA_FILTER]) + return -EOPNOTSUPP; + filter = ctnetlink_alloc_filter(cda, family); if (IS_ERR(filter)) return PTR_ERR(filter); @@ -1385,7 +1641,7 @@ static int ctnetlink_get_conntrack(struct net *net, struct sock *ctnl, } err = ctnetlink_fill_info(skb2, NETLINK_CB(skb).portid, nlh->nlmsg_seq, - NFNL_MSG_TYPE(nlh->nlmsg_type), ct, true); + NFNL_MSG_TYPE(nlh->nlmsg_type), ct, true, 0); nf_ct_put(ct); if (err <= 0) goto free; @@ -1458,7 +1714,7 @@ restart: res = ctnetlink_fill_info(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq, NFNL_MSG_TYPE(cb->nlh->nlmsg_type), - ct, dying ? true : false); + ct, dying ? 
true : false, 0); if (res < 0) { if (!atomic_inc_not_zero(&ct->ct_general.use)) continue; diff --git a/net/netfilter/nf_conntrack_pptp.c b/net/netfilter/nf_conntrack_pptp.c index a971183f11af..1f44d523b512 100644 --- a/net/netfilter/nf_conntrack_pptp.c +++ b/net/netfilter/nf_conntrack_pptp.c @@ -72,24 +72,32 @@ EXPORT_SYMBOL_GPL(nf_nat_pptp_hook_expectfn); #if defined(DEBUG) || defined(CONFIG_DYNAMIC_DEBUG) /* PptpControlMessageType names */ -const char *const pptp_msg_name[] = { - "UNKNOWN_MESSAGE", - "START_SESSION_REQUEST", - "START_SESSION_REPLY", - "STOP_SESSION_REQUEST", - "STOP_SESSION_REPLY", - "ECHO_REQUEST", - "ECHO_REPLY", - "OUT_CALL_REQUEST", - "OUT_CALL_REPLY", - "IN_CALL_REQUEST", - "IN_CALL_REPLY", - "IN_CALL_CONNECT", - "CALL_CLEAR_REQUEST", - "CALL_DISCONNECT_NOTIFY", - "WAN_ERROR_NOTIFY", - "SET_LINK_INFO" +static const char *const pptp_msg_name_array[PPTP_MSG_MAX + 1] = { + [0] = "UNKNOWN_MESSAGE", + [PPTP_START_SESSION_REQUEST] = "START_SESSION_REQUEST", + [PPTP_START_SESSION_REPLY] = "START_SESSION_REPLY", + [PPTP_STOP_SESSION_REQUEST] = "STOP_SESSION_REQUEST", + [PPTP_STOP_SESSION_REPLY] = "STOP_SESSION_REPLY", + [PPTP_ECHO_REQUEST] = "ECHO_REQUEST", + [PPTP_ECHO_REPLY] = "ECHO_REPLY", + [PPTP_OUT_CALL_REQUEST] = "OUT_CALL_REQUEST", + [PPTP_OUT_CALL_REPLY] = "OUT_CALL_REPLY", + [PPTP_IN_CALL_REQUEST] = "IN_CALL_REQUEST", + [PPTP_IN_CALL_REPLY] = "IN_CALL_REPLY", + [PPTP_IN_CALL_CONNECT] = "IN_CALL_CONNECT", + [PPTP_CALL_CLEAR_REQUEST] = "CALL_CLEAR_REQUEST", + [PPTP_CALL_DISCONNECT_NOTIFY] = "CALL_DISCONNECT_NOTIFY", + [PPTP_WAN_ERROR_NOTIFY] = "WAN_ERROR_NOTIFY", + [PPTP_SET_LINK_INFO] = "SET_LINK_INFO" }; + +const char *pptp_msg_name(u_int16_t msg) +{ + if (msg > PPTP_MSG_MAX) + return pptp_msg_name_array[0]; + + return pptp_msg_name_array[msg]; +} EXPORT_SYMBOL(pptp_msg_name); #endif @@ -276,7 +284,7 @@ pptp_inbound_pkt(struct sk_buff *skb, unsigned int protoff, typeof(nf_nat_pptp_hook_inbound) nf_nat_pptp_inbound; msg = ntohs(ctlh->messageType); - pr_debug("inbound control message %s\n", pptp_msg_name[msg]); + pr_debug("inbound control message %s\n", pptp_msg_name(msg)); switch (msg) { case PPTP_START_SESSION_REPLY: @@ -311,7 +319,7 @@ pptp_inbound_pkt(struct sk_buff *skb, unsigned int protoff, pcid = pptpReq->ocack.peersCallID; if (info->pns_call_id != pcid) goto invalid; - pr_debug("%s, CID=%X, PCID=%X\n", pptp_msg_name[msg], + pr_debug("%s, CID=%X, PCID=%X\n", pptp_msg_name(msg), ntohs(cid), ntohs(pcid)); if (pptpReq->ocack.resultCode == PPTP_OUTCALL_CONNECT) { @@ -328,7 +336,7 @@ pptp_inbound_pkt(struct sk_buff *skb, unsigned int protoff, goto invalid; cid = pptpReq->icreq.callID; - pr_debug("%s, CID=%X\n", pptp_msg_name[msg], ntohs(cid)); + pr_debug("%s, CID=%X\n", pptp_msg_name(msg), ntohs(cid)); info->cstate = PPTP_CALL_IN_REQ; info->pac_call_id = cid; break; @@ -347,7 +355,7 @@ pptp_inbound_pkt(struct sk_buff *skb, unsigned int protoff, if (info->pns_call_id != pcid) goto invalid; - pr_debug("%s, PCID=%X\n", pptp_msg_name[msg], ntohs(pcid)); + pr_debug("%s, PCID=%X\n", pptp_msg_name(msg), ntohs(pcid)); info->cstate = PPTP_CALL_IN_CONF; /* we expect a GRE connection from PAC to PNS */ @@ -357,7 +365,7 @@ pptp_inbound_pkt(struct sk_buff *skb, unsigned int protoff, case PPTP_CALL_DISCONNECT_NOTIFY: /* server confirms disconnect */ cid = pptpReq->disc.callID; - pr_debug("%s, CID=%X\n", pptp_msg_name[msg], ntohs(cid)); + pr_debug("%s, CID=%X\n", pptp_msg_name(msg), ntohs(cid)); info->cstate = PPTP_CALL_NONE; /* untrack this call id, unexpect GRE packets */ @@ 
-384,7 +392,7 @@ pptp_inbound_pkt(struct sk_buff *skb, unsigned int protoff, invalid: pr_debug("invalid %s: type=%d cid=%u pcid=%u " "cstate=%d sstate=%d pns_cid=%u pac_cid=%u\n", - msg <= PPTP_MSG_MAX ? pptp_msg_name[msg] : pptp_msg_name[0], + pptp_msg_name(msg), msg, ntohs(cid), ntohs(pcid), info->cstate, info->sstate, ntohs(info->pns_call_id), ntohs(info->pac_call_id)); return NF_ACCEPT; @@ -404,7 +412,7 @@ pptp_outbound_pkt(struct sk_buff *skb, unsigned int protoff, typeof(nf_nat_pptp_hook_outbound) nf_nat_pptp_outbound; msg = ntohs(ctlh->messageType); - pr_debug("outbound control message %s\n", pptp_msg_name[msg]); + pr_debug("outbound control message %s\n", pptp_msg_name(msg)); switch (msg) { case PPTP_START_SESSION_REQUEST: @@ -426,7 +434,7 @@ pptp_outbound_pkt(struct sk_buff *skb, unsigned int protoff, info->cstate = PPTP_CALL_OUT_REQ; /* track PNS call id */ cid = pptpReq->ocreq.callID; - pr_debug("%s, CID=%X\n", pptp_msg_name[msg], ntohs(cid)); + pr_debug("%s, CID=%X\n", pptp_msg_name(msg), ntohs(cid)); info->pns_call_id = cid; break; @@ -440,7 +448,7 @@ pptp_outbound_pkt(struct sk_buff *skb, unsigned int protoff, pcid = pptpReq->icack.peersCallID; if (info->pac_call_id != pcid) goto invalid; - pr_debug("%s, CID=%X PCID=%X\n", pptp_msg_name[msg], + pr_debug("%s, CID=%X PCID=%X\n", pptp_msg_name(msg), ntohs(cid), ntohs(pcid)); if (pptpReq->icack.resultCode == PPTP_INCALL_ACCEPT) { @@ -480,7 +488,7 @@ pptp_outbound_pkt(struct sk_buff *skb, unsigned int protoff, invalid: pr_debug("invalid %s: type=%d cid=%u pcid=%u " "cstate=%d sstate=%d pns_cid=%u pac_cid=%u\n", - msg <= PPTP_MSG_MAX ? pptp_msg_name[msg] : pptp_msg_name[0], + pptp_msg_name(msg), msg, ntohs(cid), ntohs(pcid), info->cstate, info->sstate, ntohs(info->pns_call_id), ntohs(info->pac_call_id)); return NF_ACCEPT; diff --git a/net/netfilter/nf_conntrack_proto_icmp.c b/net/netfilter/nf_conntrack_proto_icmp.c index c2e3dff773bc..4efd8741c105 100644 --- a/net/netfilter/nf_conntrack_proto_icmp.c +++ b/net/netfilter/nf_conntrack_proto_icmp.c @@ -20,6 +20,8 @@ #include <net/netfilter/nf_conntrack_zones.h> #include <net/netfilter/nf_log.h> +#include "nf_internals.h" + static const unsigned int nf_ct_icmp_timeout = 30*HZ; bool icmp_pkt_to_tuple(const struct sk_buff *skb, unsigned int dataoff, @@ -271,20 +273,32 @@ static const struct nla_policy icmp_nla_policy[CTA_PROTO_MAX+1] = { }; static int icmp_nlattr_to_tuple(struct nlattr *tb[], - struct nf_conntrack_tuple *tuple) + struct nf_conntrack_tuple *tuple, + u_int32_t flags) { - if (!tb[CTA_PROTO_ICMP_TYPE] || - !tb[CTA_PROTO_ICMP_CODE] || - !tb[CTA_PROTO_ICMP_ID]) - return -EINVAL; - - tuple->dst.u.icmp.type = nla_get_u8(tb[CTA_PROTO_ICMP_TYPE]); - tuple->dst.u.icmp.code = nla_get_u8(tb[CTA_PROTO_ICMP_CODE]); - tuple->src.u.icmp.id = nla_get_be16(tb[CTA_PROTO_ICMP_ID]); - - if (tuple->dst.u.icmp.type >= sizeof(invmap) || - !invmap[tuple->dst.u.icmp.type]) - return -EINVAL; + if (flags & CTA_FILTER_FLAG(CTA_PROTO_ICMP_TYPE)) { + if (!tb[CTA_PROTO_ICMP_TYPE]) + return -EINVAL; + + tuple->dst.u.icmp.type = nla_get_u8(tb[CTA_PROTO_ICMP_TYPE]); + if (tuple->dst.u.icmp.type >= sizeof(invmap) || + !invmap[tuple->dst.u.icmp.type]) + return -EINVAL; + } + + if (flags & CTA_FILTER_FLAG(CTA_PROTO_ICMP_CODE)) { + if (!tb[CTA_PROTO_ICMP_CODE]) + return -EINVAL; + + tuple->dst.u.icmp.code = nla_get_u8(tb[CTA_PROTO_ICMP_CODE]); + } + + if (flags & CTA_FILTER_FLAG(CTA_PROTO_ICMP_ID)) { + if (!tb[CTA_PROTO_ICMP_ID]) + return -EINVAL; + + tuple->src.u.icmp.id = nla_get_be16(tb[CTA_PROTO_ICMP_ID]); 
+ } return 0; } diff --git a/net/netfilter/nf_conntrack_proto_icmpv6.c b/net/netfilter/nf_conntrack_proto_icmpv6.c index 6f9144e1f1c1..facd8c64ec4e 100644 --- a/net/netfilter/nf_conntrack_proto_icmpv6.c +++ b/net/netfilter/nf_conntrack_proto_icmpv6.c @@ -24,6 +24,8 @@ #include <net/netfilter/nf_conntrack_zones.h> #include <net/netfilter/nf_log.h> +#include "nf_internals.h" + static const unsigned int nf_ct_icmpv6_timeout = 30*HZ; bool icmpv6_pkt_to_tuple(const struct sk_buff *skb, @@ -193,21 +195,33 @@ static const struct nla_policy icmpv6_nla_policy[CTA_PROTO_MAX+1] = { }; static int icmpv6_nlattr_to_tuple(struct nlattr *tb[], - struct nf_conntrack_tuple *tuple) + struct nf_conntrack_tuple *tuple, + u_int32_t flags) { - if (!tb[CTA_PROTO_ICMPV6_TYPE] || - !tb[CTA_PROTO_ICMPV6_CODE] || - !tb[CTA_PROTO_ICMPV6_ID]) - return -EINVAL; - - tuple->dst.u.icmp.type = nla_get_u8(tb[CTA_PROTO_ICMPV6_TYPE]); - tuple->dst.u.icmp.code = nla_get_u8(tb[CTA_PROTO_ICMPV6_CODE]); - tuple->src.u.icmp.id = nla_get_be16(tb[CTA_PROTO_ICMPV6_ID]); - - if (tuple->dst.u.icmp.type < 128 || - tuple->dst.u.icmp.type - 128 >= sizeof(invmap) || - !invmap[tuple->dst.u.icmp.type - 128]) - return -EINVAL; + if (flags & CTA_FILTER_FLAG(CTA_PROTO_ICMPV6_TYPE)) { + if (!tb[CTA_PROTO_ICMPV6_TYPE]) + return -EINVAL; + + tuple->dst.u.icmp.type = nla_get_u8(tb[CTA_PROTO_ICMPV6_TYPE]); + if (tuple->dst.u.icmp.type < 128 || + tuple->dst.u.icmp.type - 128 >= sizeof(invmap) || + !invmap[tuple->dst.u.icmp.type - 128]) + return -EINVAL; + } + + if (flags & CTA_FILTER_FLAG(CTA_PROTO_ICMPV6_CODE)) { + if (!tb[CTA_PROTO_ICMPV6_CODE]) + return -EINVAL; + + tuple->dst.u.icmp.code = nla_get_u8(tb[CTA_PROTO_ICMPV6_CODE]); + } + + if (flags & CTA_FILTER_FLAG(CTA_PROTO_ICMPV6_ID)) { + if (!tb[CTA_PROTO_ICMPV6_ID]) + return -EINVAL; + + tuple->src.u.icmp.id = nla_get_be16(tb[CTA_PROTO_ICMPV6_ID]); + } return 0; } diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c index 42da6e337276..6a3034f84ab6 100644 --- a/net/netfilter/nf_flow_table_core.c +++ b/net/netfilter/nf_flow_table_core.c @@ -588,8 +588,8 @@ static void nf_flow_table_do_cleanup(struct flow_offload *flow, void *data) flow_offload_teardown(flow); } -static void nf_flow_table_iterate_cleanup(struct nf_flowtable *flowtable, - struct net_device *dev) +void nf_flow_table_gc_cleanup(struct nf_flowtable *flowtable, + struct net_device *dev) { nf_flow_table_iterate(flowtable, nf_flow_table_do_cleanup, dev); flush_delayed_work(&flowtable->gc_work); @@ -602,7 +602,7 @@ void nf_flow_table_cleanup(struct net_device *dev) mutex_lock(&flowtable_lock); list_for_each_entry(flowtable, &flowtables, list) - nf_flow_table_iterate_cleanup(flowtable, dev); + nf_flow_table_gc_cleanup(flowtable, dev); mutex_unlock(&flowtable_lock); } EXPORT_SYMBOL_GPL(nf_flow_table_cleanup); diff --git a/net/netfilter/nf_flow_table_offload.c b/net/netfilter/nf_flow_table_offload.c index 2ff4087007a6..62651e6683f6 100644 --- a/net/netfilter/nf_flow_table_offload.c +++ b/net/netfilter/nf_flow_table_offload.c @@ -942,6 +942,18 @@ static void nf_flow_table_block_offload_init(struct flow_block_offload *bo, INIT_LIST_HEAD(&bo->cb_list); } +static void nf_flow_table_indr_cleanup(struct flow_block_cb *block_cb) +{ + struct nf_flowtable *flowtable = block_cb->indr.data; + struct net_device *dev = block_cb->indr.dev; + + nf_flow_table_gc_cleanup(flowtable, dev); + down_write(&flowtable->flow_block_lock); + list_del(&block_cb->list); + flow_block_cb_free(block_cb); + 
up_write(&flowtable->flow_block_lock); +} + static int nf_flow_table_indr_offload_cmd(struct flow_block_offload *bo, struct nf_flowtable *flowtable, struct net_device *dev, @@ -950,12 +962,9 @@ static int nf_flow_table_indr_offload_cmd(struct flow_block_offload *bo, { nf_flow_table_block_offload_init(bo, dev_net(dev), cmd, flowtable, extack); - flow_indr_block_call(dev, bo, cmd, TC_SETUP_FT); - - if (list_empty(&bo->cb_list)) - return -EOPNOTSUPP; - return 0; + return flow_indr_dev_setup_offload(dev, TC_SETUP_FT, flowtable, bo, + nf_flow_table_indr_cleanup); } static int nf_flow_table_offload_cmd(struct flow_block_offload *bo, @@ -999,69 +1008,6 @@ int nf_flow_table_offload_setup(struct nf_flowtable *flowtable, } EXPORT_SYMBOL_GPL(nf_flow_table_offload_setup); -static void nf_flow_table_indr_block_ing_cmd(struct net_device *dev, - struct nf_flowtable *flowtable, - flow_indr_block_bind_cb_t *cb, - void *cb_priv, - enum flow_block_command cmd) -{ - struct netlink_ext_ack extack = {}; - struct flow_block_offload bo; - - if (!flowtable) - return; - - nf_flow_table_block_offload_init(&bo, dev_net(dev), cmd, flowtable, - &extack); - - cb(dev, cb_priv, TC_SETUP_FT, &bo); - - nf_flow_table_block_setup(flowtable, &bo, cmd); -} - -static void nf_flow_table_indr_block_cb_cmd(struct nf_flowtable *flowtable, - struct net_device *dev, - flow_indr_block_bind_cb_t *cb, - void *cb_priv, - enum flow_block_command cmd) -{ - if (!(flowtable->flags & NF_FLOWTABLE_HW_OFFLOAD)) - return; - - nf_flow_table_indr_block_ing_cmd(dev, flowtable, cb, cb_priv, cmd); -} - -static void nf_flow_table_indr_block_cb(struct net_device *dev, - flow_indr_block_bind_cb_t *cb, - void *cb_priv, - enum flow_block_command cmd) -{ - struct net *net = dev_net(dev); - struct nft_flowtable *nft_ft; - struct nft_table *table; - struct nft_hook *hook; - - mutex_lock(&net->nft.commit_mutex); - list_for_each_entry(table, &net->nft.tables, list) { - list_for_each_entry(nft_ft, &table->flowtables, list) { - list_for_each_entry(hook, &nft_ft->hook_list, list) { - if (hook->ops.dev != dev) - continue; - - nf_flow_table_indr_block_cb_cmd(&nft_ft->data, - dev, cb, - cb_priv, cmd); - } - } - } - mutex_unlock(&net->nft.commit_mutex); -} - -static struct flow_indr_block_entry block_ing_entry = { - .cb = nf_flow_table_indr_block_cb, - .list = LIST_HEAD_INIT(block_ing_entry.list), -}; - int nf_flow_table_offload_init(void) { nf_flow_offload_wq = alloc_workqueue("nf_flow_table_offload", @@ -1069,13 +1015,10 @@ int nf_flow_table_offload_init(void) if (!nf_flow_offload_wq) return -ENOMEM; - flow_indr_add_block_cb(&block_ing_entry); - return 0; } void nf_flow_table_offload_exit(void) { - flow_indr_del_block_cb(&block_ing_entry); destroy_workqueue(nf_flow_offload_wq); } diff --git a/net/netfilter/nf_internals.h b/net/netfilter/nf_internals.h index d6c43902ebd7..832ae64179f0 100644 --- a/net/netfilter/nf_internals.h +++ b/net/netfilter/nf_internals.h @@ -6,6 +6,23 @@ #include <linux/skbuff.h> #include <linux/netdevice.h> +/* nf_conntrack_netlink.c: applied on tuple filters */ +#define CTA_FILTER_F_CTA_IP_SRC (1 << 0) +#define CTA_FILTER_F_CTA_IP_DST (1 << 1) +#define CTA_FILTER_F_CTA_TUPLE_ZONE (1 << 2) +#define CTA_FILTER_F_CTA_PROTO_NUM (1 << 3) +#define CTA_FILTER_F_CTA_PROTO_SRC_PORT (1 << 4) +#define CTA_FILTER_F_CTA_PROTO_DST_PORT (1 << 5) +#define CTA_FILTER_F_CTA_PROTO_ICMP_TYPE (1 << 6) +#define CTA_FILTER_F_CTA_PROTO_ICMP_CODE (1 << 7) +#define CTA_FILTER_F_CTA_PROTO_ICMP_ID (1 << 8) +#define CTA_FILTER_F_CTA_PROTO_ICMPV6_TYPE (1 << 9) +#define 
CTA_FILTER_F_CTA_PROTO_ICMPV6_CODE (1 << 10) +#define CTA_FILTER_F_CTA_PROTO_ICMPV6_ID (1 << 11) +#define CTA_FILTER_F_MAX (1 << 12) +#define CTA_FILTER_F_ALL (CTA_FILTER_F_MAX-1) +#define CTA_FILTER_FLAG(ctattr) CTA_FILTER_F_ ## ctattr + /* nf_queue.c */ void nf_queue_nf_hook_drop(struct net *net); diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c index 3558e76e2733..073aa1051d43 100644 --- a/net/netfilter/nf_tables_api.c +++ b/net/netfilter/nf_tables_api.c @@ -1669,6 +1669,7 @@ static struct nft_hook *nft_netdev_hook_alloc(struct net *net, goto err_hook_dev; } hook->ops.dev = dev; + hook->inactive = false; return hook; @@ -1678,17 +1679,17 @@ err_hook_alloc: return ERR_PTR(err); } -static bool nft_hook_list_find(struct list_head *hook_list, - const struct nft_hook *this) +static struct nft_hook *nft_hook_list_find(struct list_head *hook_list, + const struct nft_hook *this) { struct nft_hook *hook; list_for_each_entry(hook, hook_list, list) { if (this->ops.dev == hook->ops.dev) - return true; + return hook; } - return false; + return NULL; } static int nf_tables_parse_netdev_hooks(struct net *net, @@ -1723,8 +1724,6 @@ static int nf_tables_parse_netdev_hooks(struct net *net, goto err_hook; } } - if (!n) - return -EINVAL; return 0; @@ -1761,6 +1760,9 @@ static int nft_chain_parse_netdev(struct net *net, hook_list); if (err < 0) return err; + + if (list_empty(hook_list)) + return -EINVAL; } else { return -EINVAL; } @@ -6178,50 +6180,77 @@ nft_flowtable_lookup_byhandle(const struct nft_table *table, return ERR_PTR(-ENOENT); } +struct nft_flowtable_hook { + u32 num; + int priority; + struct list_head list; +}; + static const struct nla_policy nft_flowtable_hook_policy[NFTA_FLOWTABLE_HOOK_MAX + 1] = { [NFTA_FLOWTABLE_HOOK_NUM] = { .type = NLA_U32 }, [NFTA_FLOWTABLE_HOOK_PRIORITY] = { .type = NLA_U32 }, [NFTA_FLOWTABLE_HOOK_DEVS] = { .type = NLA_NESTED }, }; -static int nf_tables_flowtable_parse_hook(const struct nft_ctx *ctx, - const struct nlattr *attr, - struct nft_flowtable *flowtable) +static int nft_flowtable_parse_hook(const struct nft_ctx *ctx, + const struct nlattr *attr, + struct nft_flowtable_hook *flowtable_hook, + struct nft_flowtable *flowtable, bool add) { struct nlattr *tb[NFTA_FLOWTABLE_HOOK_MAX + 1]; struct nft_hook *hook; int hooknum, priority; int err; + INIT_LIST_HEAD(&flowtable_hook->list); + err = nla_parse_nested_deprecated(tb, NFTA_FLOWTABLE_HOOK_MAX, attr, nft_flowtable_hook_policy, NULL); if (err < 0) return err; - if (!tb[NFTA_FLOWTABLE_HOOK_NUM] || - !tb[NFTA_FLOWTABLE_HOOK_PRIORITY] || - !tb[NFTA_FLOWTABLE_HOOK_DEVS]) - return -EINVAL; + if (add) { + if (!tb[NFTA_FLOWTABLE_HOOK_NUM] || + !tb[NFTA_FLOWTABLE_HOOK_PRIORITY]) + return -EINVAL; - hooknum = ntohl(nla_get_be32(tb[NFTA_FLOWTABLE_HOOK_NUM])); - if (hooknum != NF_NETDEV_INGRESS) - return -EINVAL; + hooknum = ntohl(nla_get_be32(tb[NFTA_FLOWTABLE_HOOK_NUM])); + if (hooknum != NF_NETDEV_INGRESS) + return -EOPNOTSUPP; - priority = ntohl(nla_get_be32(tb[NFTA_FLOWTABLE_HOOK_PRIORITY])); + priority = ntohl(nla_get_be32(tb[NFTA_FLOWTABLE_HOOK_PRIORITY])); - err = nf_tables_parse_netdev_hooks(ctx->net, - tb[NFTA_FLOWTABLE_HOOK_DEVS], - &flowtable->hook_list); - if (err < 0) - return err; + flowtable_hook->priority = priority; + flowtable_hook->num = hooknum; + } else { + if (tb[NFTA_FLOWTABLE_HOOK_NUM]) { + hooknum = ntohl(nla_get_be32(tb[NFTA_FLOWTABLE_HOOK_NUM])); + if (hooknum != flowtable->hooknum) + return -EOPNOTSUPP; + } - flowtable->hooknum = hooknum; - flowtable->data.priority = 
priority; + if (tb[NFTA_FLOWTABLE_HOOK_PRIORITY]) { + priority = ntohl(nla_get_be32(tb[NFTA_FLOWTABLE_HOOK_PRIORITY])); + if (priority != flowtable->data.priority) + return -EOPNOTSUPP; + } - list_for_each_entry(hook, &flowtable->hook_list, list) { + flowtable_hook->priority = flowtable->data.priority; + flowtable_hook->num = flowtable->hooknum; + } + + if (tb[NFTA_FLOWTABLE_HOOK_DEVS]) { + err = nf_tables_parse_netdev_hooks(ctx->net, + tb[NFTA_FLOWTABLE_HOOK_DEVS], + &flowtable_hook->list); + if (err < 0) + return err; + } + + list_for_each_entry(hook, &flowtable_hook->list, list) { hook->ops.pf = NFPROTO_NETDEV; - hook->ops.hooknum = hooknum; - hook->ops.priority = priority; + hook->ops.hooknum = flowtable_hook->num; + hook->ops.priority = flowtable_hook->priority; hook->ops.priv = &flowtable->data; hook->ops.hook = flowtable->data.type->hook; } @@ -6270,23 +6299,24 @@ static void nft_unregister_flowtable_hook(struct net *net, } static void nft_unregister_flowtable_net_hooks(struct net *net, - struct nft_flowtable *flowtable) + struct list_head *hook_list) { struct nft_hook *hook; - list_for_each_entry(hook, &flowtable->hook_list, list) + list_for_each_entry(hook, hook_list, list) nf_unregister_net_hook(net, &hook->ops); } static int nft_register_flowtable_net_hooks(struct net *net, struct nft_table *table, + struct list_head *hook_list, struct nft_flowtable *flowtable) { struct nft_hook *hook, *hook2, *next; struct nft_flowtable *ft; int err, i = 0; - list_for_each_entry(hook, &flowtable->hook_list, list) { + list_for_each_entry(hook, hook_list, list) { list_for_each_entry(ft, &table->flowtables, list) { list_for_each_entry(hook2, &ft->hook_list, list) { if (hook->ops.dev == hook2->ops.dev && @@ -6317,7 +6347,7 @@ static int nft_register_flowtable_net_hooks(struct net *net, return 0; err_unregister_net_hooks: - list_for_each_entry_safe(hook, next, &flowtable->hook_list, list) { + list_for_each_entry_safe(hook, next, hook_list, list) { if (i-- <= 0) break; @@ -6329,6 +6359,72 @@ err_unregister_net_hooks: return err; } +static void nft_flowtable_hooks_destroy(struct list_head *hook_list) +{ + struct nft_hook *hook, *next; + + list_for_each_entry_safe(hook, next, hook_list, list) { + list_del_rcu(&hook->list); + kfree_rcu(hook, rcu); + } +} + +static int nft_flowtable_update(struct nft_ctx *ctx, const struct nlmsghdr *nlh, + struct nft_flowtable *flowtable) +{ + const struct nlattr * const *nla = ctx->nla; + struct nft_flowtable_hook flowtable_hook; + struct nft_hook *hook, *next; + struct nft_trans *trans; + bool unregister = false; + int err; + + err = nft_flowtable_parse_hook(ctx, nla[NFTA_FLOWTABLE_HOOK], + &flowtable_hook, flowtable, false); + if (err < 0) + return err; + + list_for_each_entry_safe(hook, next, &flowtable_hook.list, list) { + if (nft_hook_list_find(&flowtable->hook_list, hook)) { + list_del(&hook->list); + kfree(hook); + } + } + + err = nft_register_flowtable_net_hooks(ctx->net, ctx->table, + &flowtable_hook.list, flowtable); + if (err < 0) + goto err_flowtable_update_hook; + + trans = nft_trans_alloc(ctx, NFT_MSG_NEWFLOWTABLE, + sizeof(struct nft_trans_flowtable)); + if (!trans) { + unregister = true; + err = -ENOMEM; + goto err_flowtable_update_hook; + } + + nft_trans_flowtable(trans) = flowtable; + nft_trans_flowtable_update(trans) = true; + INIT_LIST_HEAD(&nft_trans_flowtable_hooks(trans)); + list_splice(&flowtable_hook.list, &nft_trans_flowtable_hooks(trans)); + + list_add_tail(&trans->list, &ctx->net->nft.commit_list); + + return 0; + +err_flowtable_update_hook: 
+ list_for_each_entry_safe(hook, next, &flowtable_hook.list, list) { + if (unregister) + nft_unregister_flowtable_hook(ctx->net, flowtable, hook); + list_del_rcu(&hook->list); + kfree_rcu(hook, rcu); + } + + return err; + +} + static int nf_tables_newflowtable(struct net *net, struct sock *nlsk, struct sk_buff *skb, const struct nlmsghdr *nlh, @@ -6336,6 +6432,7 @@ static int nf_tables_newflowtable(struct net *net, struct sock *nlsk, struct netlink_ext_ack *extack) { const struct nfgenmsg *nfmsg = nlmsg_data(nlh); + struct nft_flowtable_hook flowtable_hook; const struct nf_flowtable_type *type; u8 genmask = nft_genmask_next(net); int family = nfmsg->nfgen_family; @@ -6371,7 +6468,9 @@ static int nf_tables_newflowtable(struct net *net, struct sock *nlsk, return -EEXIST; } - return 0; + nft_ctx_init(&ctx, net, skb, nlh, family, table, NULL, nla); + + return nft_flowtable_update(&ctx, nlh, flowtable); } nft_ctx_init(&ctx, net, skb, nlh, family, table, NULL, nla); @@ -6409,17 +6508,20 @@ static int nf_tables_newflowtable(struct net *net, struct sock *nlsk, if (err < 0) goto err3; - err = nf_tables_flowtable_parse_hook(&ctx, nla[NFTA_FLOWTABLE_HOOK], - flowtable); + err = nft_flowtable_parse_hook(&ctx, nla[NFTA_FLOWTABLE_HOOK], + &flowtable_hook, flowtable, true); if (err < 0) goto err4; - err = nft_register_flowtable_net_hooks(ctx.net, table, flowtable); + list_splice(&flowtable_hook.list, &flowtable->hook_list); + flowtable->data.priority = flowtable_hook.priority; + flowtable->hooknum = flowtable_hook.num; + + err = nft_register_flowtable_net_hooks(ctx.net, table, + &flowtable->hook_list, + flowtable); if (err < 0) { - list_for_each_entry_safe(hook, next, &flowtable->hook_list, list) { - list_del_rcu(&hook->list); - kfree_rcu(hook, rcu); - } + nft_flowtable_hooks_destroy(&flowtable->hook_list); goto err4; } @@ -6448,6 +6550,51 @@ err1: return err; } +static int nft_delflowtable_hook(struct nft_ctx *ctx, + struct nft_flowtable *flowtable) +{ + const struct nlattr * const *nla = ctx->nla; + struct nft_flowtable_hook flowtable_hook; + struct nft_hook *this, *next, *hook; + struct nft_trans *trans; + int err; + + err = nft_flowtable_parse_hook(ctx, nla[NFTA_FLOWTABLE_HOOK], + &flowtable_hook, flowtable, false); + if (err < 0) + return err; + + list_for_each_entry_safe(this, next, &flowtable_hook.list, list) { + hook = nft_hook_list_find(&flowtable->hook_list, this); + if (!hook) { + err = -ENOENT; + goto err_flowtable_del_hook; + } + hook->inactive = true; + list_del(&this->list); + kfree(this); + } + + trans = nft_trans_alloc(ctx, NFT_MSG_DELFLOWTABLE, + sizeof(struct nft_trans_flowtable)); + if (!trans) + return -ENOMEM; + + nft_trans_flowtable(trans) = flowtable; + nft_trans_flowtable_update(trans) = true; + INIT_LIST_HEAD(&nft_trans_flowtable_hooks(trans)); + + list_add_tail(&trans->list, &ctx->net->nft.commit_list); + + return 0; + +err_flowtable_del_hook: + list_for_each_entry(hook, &flowtable_hook.list, list) + hook->inactive = false; + + return err; +} + static int nf_tables_delflowtable(struct net *net, struct sock *nlsk, struct sk_buff *skb, const struct nlmsghdr *nlh, @@ -6486,20 +6633,25 @@ static int nf_tables_delflowtable(struct net *net, struct sock *nlsk, NL_SET_BAD_ATTR(extack, attr); return PTR_ERR(flowtable); } + + nft_ctx_init(&ctx, net, skb, nlh, family, table, NULL, nla); + + if (nla[NFTA_FLOWTABLE_HOOK]) + return nft_delflowtable_hook(&ctx, flowtable); + if (flowtable->use > 0) { NL_SET_BAD_ATTR(extack, attr); return -EBUSY; } - nft_ctx_init(&ctx, net, skb, nlh, family, 
table, NULL, nla); - return nft_delflowtable(&ctx, flowtable); } static int nf_tables_fill_flowtable_info(struct sk_buff *skb, struct net *net, u32 portid, u32 seq, int event, u32 flags, int family, - struct nft_flowtable *flowtable) + struct nft_flowtable *flowtable, + struct list_head *hook_list) { struct nlattr *nest, *nest_devs; struct nfgenmsg *nfmsg; @@ -6535,7 +6687,7 @@ static int nf_tables_fill_flowtable_info(struct sk_buff *skb, struct net *net, if (!nest_devs) goto nla_put_failure; - list_for_each_entry_rcu(hook, &flowtable->hook_list, list) { + list_for_each_entry_rcu(hook, hook_list, list) { if (nla_put_string(skb, NFTA_DEVICE_NAME, hook->ops.dev->name)) goto nla_put_failure; } @@ -6588,7 +6740,9 @@ static int nf_tables_dump_flowtable(struct sk_buff *skb, cb->nlh->nlmsg_seq, NFT_MSG_NEWFLOWTABLE, NLM_F_MULTI | NLM_F_APPEND, - table->family, flowtable) < 0) + table->family, + flowtable, + &flowtable->hook_list) < 0) goto done; nl_dump_check_consistent(cb, nlmsg_hdr(skb)); @@ -6685,7 +6839,7 @@ static int nf_tables_getflowtable(struct net *net, struct sock *nlsk, err = nf_tables_fill_flowtable_info(skb2, net, NETLINK_CB(skb).portid, nlh->nlmsg_seq, NFT_MSG_NEWFLOWTABLE, 0, family, - flowtable); + flowtable, &flowtable->hook_list); if (err < 0) goto err; @@ -6697,6 +6851,7 @@ err: static void nf_tables_flowtable_notify(struct nft_ctx *ctx, struct nft_flowtable *flowtable, + struct list_head *hook_list, int event) { struct sk_buff *skb; @@ -6712,7 +6867,7 @@ static void nf_tables_flowtable_notify(struct nft_ctx *ctx, err = nf_tables_fill_flowtable_info(skb, ctx->net, ctx->portid, ctx->seq, event, 0, - ctx->family, flowtable); + ctx->family, flowtable, hook_list); if (err < 0) { kfree_skb(skb); goto err; @@ -7098,7 +7253,10 @@ static void nft_commit_release(struct nft_trans *trans) nft_obj_destroy(&trans->ctx, nft_trans_obj(trans)); break; case NFT_MSG_DELFLOWTABLE: - nf_tables_flowtable_destroy(nft_trans_flowtable(trans)); + if (nft_trans_flowtable_update(trans)) + nft_flowtable_hooks_destroy(&nft_trans_flowtable_hooks(trans)); + else + nf_tables_flowtable_destroy(nft_trans_flowtable(trans)); break; } @@ -7259,6 +7417,17 @@ static void nft_chain_del(struct nft_chain *chain) list_del_rcu(&chain->list); } +static void nft_flowtable_hooks_del(struct nft_flowtable *flowtable, + struct list_head *hook_list) +{ + struct nft_hook *hook, *next; + + list_for_each_entry_safe(hook, next, &flowtable->hook_list, list) { + if (hook->inactive) + list_move(&hook->list, hook_list); + } +} + static void nf_tables_module_autoload_cleanup(struct net *net) { struct nft_module_request *req, *next; @@ -7467,19 +7636,41 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb) NFT_MSG_DELOBJ); break; case NFT_MSG_NEWFLOWTABLE: - nft_clear(net, nft_trans_flowtable(trans)); - nf_tables_flowtable_notify(&trans->ctx, - nft_trans_flowtable(trans), - NFT_MSG_NEWFLOWTABLE); + if (nft_trans_flowtable_update(trans)) { + nf_tables_flowtable_notify(&trans->ctx, + nft_trans_flowtable(trans), + &nft_trans_flowtable_hooks(trans), + NFT_MSG_NEWFLOWTABLE); + list_splice(&nft_trans_flowtable_hooks(trans), + &nft_trans_flowtable(trans)->hook_list); + } else { + nft_clear(net, nft_trans_flowtable(trans)); + nf_tables_flowtable_notify(&trans->ctx, + nft_trans_flowtable(trans), + &nft_trans_flowtable(trans)->hook_list, + NFT_MSG_NEWFLOWTABLE); + } nft_trans_destroy(trans); break; case NFT_MSG_DELFLOWTABLE: - list_del_rcu(&nft_trans_flowtable(trans)->list); - nf_tables_flowtable_notify(&trans->ctx, - 
nft_trans_flowtable(trans), - NFT_MSG_DELFLOWTABLE); - nft_unregister_flowtable_net_hooks(net, - nft_trans_flowtable(trans)); + if (nft_trans_flowtable_update(trans)) { + nft_flowtable_hooks_del(nft_trans_flowtable(trans), + &nft_trans_flowtable_hooks(trans)); + nf_tables_flowtable_notify(&trans->ctx, + nft_trans_flowtable(trans), + &nft_trans_flowtable_hooks(trans), + NFT_MSG_DELFLOWTABLE); + nft_unregister_flowtable_net_hooks(net, + &nft_trans_flowtable_hooks(trans)); + } else { + list_del_rcu(&nft_trans_flowtable(trans)->list); + nf_tables_flowtable_notify(&trans->ctx, + nft_trans_flowtable(trans), + &nft_trans_flowtable(trans)->hook_list, + NFT_MSG_DELFLOWTABLE); + nft_unregister_flowtable_net_hooks(net, + &nft_trans_flowtable(trans)->hook_list); + } break; } } @@ -7528,7 +7719,10 @@ static void nf_tables_abort_release(struct nft_trans *trans) nft_obj_destroy(&trans->ctx, nft_trans_obj(trans)); break; case NFT_MSG_NEWFLOWTABLE: - nf_tables_flowtable_destroy(nft_trans_flowtable(trans)); + if (nft_trans_flowtable_update(trans)) + nft_flowtable_hooks_destroy(&nft_trans_flowtable_hooks(trans)); + else + nf_tables_flowtable_destroy(nft_trans_flowtable(trans)); break; } kfree(trans); @@ -7538,6 +7732,7 @@ static int __nf_tables_abort(struct net *net, bool autoload) { struct nft_trans *trans, *next; struct nft_trans_elem *te; + struct nft_hook *hook; list_for_each_entry_safe_reverse(trans, next, &net->nft.commit_list, list) { @@ -7635,14 +7830,24 @@ static int __nf_tables_abort(struct net *net, bool autoload) nft_trans_destroy(trans); break; case NFT_MSG_NEWFLOWTABLE: - trans->ctx.table->use--; - list_del_rcu(&nft_trans_flowtable(trans)->list); - nft_unregister_flowtable_net_hooks(net, - nft_trans_flowtable(trans)); + if (nft_trans_flowtable_update(trans)) { + nft_unregister_flowtable_net_hooks(net, + &nft_trans_flowtable_hooks(trans)); + } else { + trans->ctx.table->use--; + list_del_rcu(&nft_trans_flowtable(trans)->list); + nft_unregister_flowtable_net_hooks(net, + &nft_trans_flowtable(trans)->hook_list); + } break; case NFT_MSG_DELFLOWTABLE: - trans->ctx.table->use++; - nft_clear(trans->ctx.net, nft_trans_flowtable(trans)); + if (nft_trans_flowtable_update(trans)) { + list_for_each_entry(hook, &nft_trans_flowtable(trans)->hook_list, list) + hook->inactive = false; + } else { + trans->ctx.table->use++; + nft_clear(trans->ctx.net, nft_trans_flowtable(trans)); + } nft_trans_destroy(trans); break; } diff --git a/net/netfilter/nf_tables_offload.c b/net/netfilter/nf_tables_offload.c index 954bccb7f32a..185fc82c99aa 100644 --- a/net/netfilter/nf_tables_offload.c +++ b/net/netfilter/nf_tables_offload.c @@ -285,40 +285,41 @@ static int nft_block_offload_cmd(struct nft_base_chain *chain, return nft_block_setup(chain, &bo, cmd); } -static void nft_indr_block_ing_cmd(struct net_device *dev, - struct nft_base_chain *chain, - flow_indr_block_bind_cb_t *cb, - void *cb_priv, - enum flow_block_command cmd) +static void nft_indr_block_cleanup(struct flow_block_cb *block_cb) { + struct nft_base_chain *basechain = block_cb->indr.data; + struct net_device *dev = block_cb->indr.dev; struct netlink_ext_ack extack = {}; + struct net *net = dev_net(dev); struct flow_block_offload bo; - if (!chain) - return; - - nft_flow_block_offload_init(&bo, dev_net(dev), cmd, chain, &extack); - - cb(dev, cb_priv, TC_SETUP_BLOCK, &bo); - - nft_block_setup(chain, &bo, cmd); + nft_flow_block_offload_init(&bo, dev_net(dev), FLOW_BLOCK_UNBIND, + basechain, &extack); + mutex_lock(&net->nft.commit_mutex); + list_move(&block_cb->list, 
&bo.cb_list); + nft_flow_offload_unbind(&bo, basechain); + mutex_unlock(&net->nft.commit_mutex); } -static int nft_indr_block_offload_cmd(struct nft_base_chain *chain, +static int nft_indr_block_offload_cmd(struct nft_base_chain *basechain, struct net_device *dev, enum flow_block_command cmd) { struct netlink_ext_ack extack = {}; struct flow_block_offload bo; + int err; - nft_flow_block_offload_init(&bo, dev_net(dev), cmd, chain, &extack); + nft_flow_block_offload_init(&bo, dev_net(dev), cmd, basechain, &extack); - flow_indr_block_call(dev, &bo, cmd, TC_SETUP_BLOCK); + err = flow_indr_dev_setup_offload(dev, TC_SETUP_BLOCK, basechain, &bo, + nft_indr_block_cleanup); + if (err < 0) + return err; if (list_empty(&bo.cb_list)) return -EOPNOTSUPP; - return nft_block_setup(chain, &bo, cmd); + return nft_block_setup(basechain, &bo, cmd); } #define FLOW_SETUP_BLOCK TC_SETUP_BLOCK @@ -555,24 +556,6 @@ static struct nft_chain *__nft_offload_get_chain(struct net_device *dev) return NULL; } -static void nft_indr_block_cb(struct net_device *dev, - flow_indr_block_bind_cb_t *cb, void *cb_priv, - enum flow_block_command cmd) -{ - struct net *net = dev_net(dev); - struct nft_chain *chain; - - mutex_lock(&net->nft.commit_mutex); - chain = __nft_offload_get_chain(dev); - if (chain && chain->flags & NFT_CHAIN_HW_OFFLOAD) { - struct nft_base_chain *basechain; - - basechain = nft_base_chain(chain); - nft_indr_block_ing_cmd(dev, basechain, cb, cb_priv, cmd); - } - mutex_unlock(&net->nft.commit_mutex); -} - static int nft_offload_netdev_event(struct notifier_block *this, unsigned long event, void *ptr) { @@ -594,30 +577,16 @@ static int nft_offload_netdev_event(struct notifier_block *this, return NOTIFY_DONE; } -static struct flow_indr_block_entry block_ing_entry = { - .cb = nft_indr_block_cb, - .list = LIST_HEAD_INIT(block_ing_entry.list), -}; - static struct notifier_block nft_offload_netdev_notifier = { .notifier_call = nft_offload_netdev_event, }; int nft_offload_init(void) { - int err; - - err = register_netdevice_notifier(&nft_offload_netdev_notifier); - if (err < 0) - return err; - - flow_indr_add_block_cb(&block_ing_entry); - - return 0; + return register_netdevice_notifier(&nft_offload_netdev_notifier); } void nft_offload_exit(void) { - flow_indr_del_block_cb(&block_ing_entry); unregister_netdevice_notifier(&nft_offload_netdev_notifier); } diff --git a/net/netfilter/nfnetlink_cthelper.c b/net/netfilter/nfnetlink_cthelper.c index a5f294aa8e4c..5b0d0a77379c 100644 --- a/net/netfilter/nfnetlink_cthelper.c +++ b/net/netfilter/nfnetlink_cthelper.c @@ -103,7 +103,7 @@ nfnl_cthelper_from_nlattr(struct nlattr *attr, struct nf_conn *ct) if (help->helper->data_len == 0) return -EINVAL; - nla_memcpy(help->data, nla_data(attr), sizeof(help->data)); + nla_memcpy(help->data, attr, sizeof(help->data)); return 0; } @@ -240,6 +240,7 @@ nfnl_cthelper_create(const struct nlattr * const tb[], ret = -ENOMEM; goto err2; } + helper->data_len = size; helper->flags |= NF_CT_HELPER_F_USERSPACE; memcpy(&helper->tuple, tuple, sizeof(struct nf_conntrack_tuple)); diff --git a/net/qrtr/ns.c b/net/qrtr/ns.c index 3ca196fc7f9b..d8252fdab851 100644 --- a/net/qrtr/ns.c +++ b/net/qrtr/ns.c @@ -714,6 +714,10 @@ void qrtr_ns_init(void) goto err_sock; } + qrtr_ns.workqueue = alloc_workqueue("qrtr_ns_handler", WQ_UNBOUND, 1); + if (!qrtr_ns.workqueue) + goto err_sock; + qrtr_ns.sock->sk->sk_data_ready = qrtr_ns_data_ready; sq.sq_port = QRTR_PORT_CTRL; @@ -722,17 +726,13 @@ void qrtr_ns_init(void) ret = kernel_bind(qrtr_ns.sock, (struct 
sockaddr *)&sq, sizeof(sq)); if (ret < 0) { pr_err("failed to bind to socket\n"); - goto err_sock; + goto err_wq; } qrtr_ns.bcast_sq.sq_family = AF_QIPCRTR; qrtr_ns.bcast_sq.sq_node = QRTR_NODE_BCAST; qrtr_ns.bcast_sq.sq_port = QRTR_PORT_CTRL; - qrtr_ns.workqueue = alloc_workqueue("qrtr_ns_handler", WQ_UNBOUND, 1); - if (!qrtr_ns.workqueue) - goto err_sock; - ret = say_hello(&qrtr_ns.bcast_sq); if (ret < 0) goto err_wq; diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c index 9adff83b523b..e29f0f45d688 100644 --- a/net/sched/act_ct.c +++ b/net/sched/act_ct.c @@ -200,6 +200,9 @@ static int tcf_ct_flow_table_add_action_nat(struct net *net, const struct nf_conntrack_tuple *tuple = &ct->tuplehash[dir].tuple; struct nf_conntrack_tuple target; + if (!(ct->status & IPS_NAT_MASK)) + return 0; + nf_ct_invert_tuple(&target, &ct->tuplehash[!dir].tuple); switch (tuple->src.l3num) { diff --git a/net/sched/act_gate.c b/net/sched/act_gate.c index 35fc48795541..9c628591f452 100644 --- a/net/sched/act_gate.c +++ b/net/sched/act_gate.c @@ -331,6 +331,10 @@ static int tcf_gate_init(struct net *net, struct nlattr *nla, tcf_idr_release(*a, bind); return -EEXIST; } + if (ret == ACT_P_CREATED) { + to_gate(*a)->param.tcfg_clockid = -1; + INIT_LIST_HEAD(&(to_gate(*a)->param.entries)); + } if (tb[TCA_GATE_PRIORITY]) prio = nla_get_s32(tb[TCA_GATE_PRIORITY]); @@ -377,7 +381,6 @@ static int tcf_gate_init(struct net *net, struct nlattr *nla, goto chain_put; } - INIT_LIST_HEAD(&p->entries); if (tb[TCA_GATE_ENTRY_LIST]) { err = parse_gate_list(tb[TCA_GATE_ENTRY_LIST], p, extack); if (err < 0) @@ -449,9 +452,9 @@ static void tcf_gate_cleanup(struct tc_action *a) struct tcf_gate *gact = to_gate(a); struct tcf_gate_params *p; - hrtimer_cancel(&gact->hitimer); - p = &gact->param; + if (p->tcfg_clockid != -1) + hrtimer_cancel(&gact->hitimer); release_entry_list(&p->entries); } diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c index 752d608f4442..a00a203b2ef5 100644 --- a/net/sched/cls_api.c +++ b/net/sched/cls_api.c @@ -621,96 +621,42 @@ static void tcf_chain_flush(struct tcf_chain *chain, bool rtnl_held) static int tcf_block_setup(struct tcf_block *block, struct flow_block_offload *bo); -static void tc_indr_block_cmd(struct net_device *dev, struct tcf_block *block, - flow_indr_block_bind_cb_t *cb, void *cb_priv, - enum flow_block_command command, bool ingress) -{ - struct flow_block_offload bo = { - .command = command, - .binder_type = ingress ? - FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS : - FLOW_BLOCK_BINDER_TYPE_CLSACT_EGRESS, - .net = dev_net(dev), - .block_shared = tcf_block_non_null_shared(block), - }; - INIT_LIST_HEAD(&bo.cb_list); - - if (!block) - return; - - bo.block = &block->flow_block; - - down_write(&block->cb_lock); - cb(dev, cb_priv, TC_SETUP_BLOCK, &bo); - - tcf_block_setup(block, &bo); - up_write(&block->cb_lock); -} - -static struct tcf_block *tc_dev_block(struct net_device *dev, bool ingress) -{ - const struct Qdisc_class_ops *cops; - const struct Qdisc_ops *ops; - struct Qdisc *qdisc; - - if (!dev_ingress_queue(dev)) - return NULL; - - qdisc = dev_ingress_queue(dev)->qdisc_sleeping; - if (!qdisc) - return NULL; - - ops = qdisc->ops; - if (!ops) - return NULL; - - if (!ingress && !strcmp("ingress", ops->id)) - return NULL; - - cops = ops->cl_ops; - if (!cops) - return NULL; - - if (!cops->tcf_block) - return NULL; - - return cops->tcf_block(qdisc, - ingress ? 
TC_H_MIN_INGRESS : TC_H_MIN_EGRESS, - NULL); +static void tcf_block_offload_init(struct flow_block_offload *bo, + struct net_device *dev, + enum flow_block_command command, + enum flow_block_binder_type binder_type, + struct flow_block *flow_block, + bool shared, struct netlink_ext_ack *extack) +{ + bo->net = dev_net(dev); + bo->command = command; + bo->binder_type = binder_type; + bo->block = flow_block; + bo->block_shared = shared; + bo->extack = extack; + INIT_LIST_HEAD(&bo->cb_list); } -static void tc_indr_block_get_and_cmd(struct net_device *dev, - flow_indr_block_bind_cb_t *cb, - void *cb_priv, - enum flow_block_command command) -{ - struct tcf_block *block; - - block = tc_dev_block(dev, true); - tc_indr_block_cmd(dev, block, cb, cb_priv, command, true); - - block = tc_dev_block(dev, false); - tc_indr_block_cmd(dev, block, cb, cb_priv, command, false); -} +static void tcf_block_unbind(struct tcf_block *block, + struct flow_block_offload *bo); -static void tc_indr_block_call(struct tcf_block *block, - struct net_device *dev, - struct tcf_block_ext_info *ei, - enum flow_block_command command, - struct netlink_ext_ack *extack) +static void tc_block_indr_cleanup(struct flow_block_cb *block_cb) { - struct flow_block_offload bo = { - .command = command, - .binder_type = ei->binder_type, - .net = dev_net(dev), - .block = &block->flow_block, - .block_shared = tcf_block_shared(block), - .extack = extack, - }; - INIT_LIST_HEAD(&bo.cb_list); + struct tcf_block *block = block_cb->indr.data; + struct net_device *dev = block_cb->indr.dev; + struct netlink_ext_ack extack = {}; + struct flow_block_offload bo; - flow_indr_block_call(dev, &bo, command, TC_SETUP_BLOCK); - tcf_block_setup(block, &bo); + tcf_block_offload_init(&bo, dev, FLOW_BLOCK_UNBIND, + block_cb->indr.binder_type, + &block->flow_block, tcf_block_shared(block), + &extack); + down_write(&block->cb_lock); + list_move(&block_cb->list, &bo.cb_list); + up_write(&block->cb_lock); + rtnl_lock(); + tcf_block_unbind(block, &bo); + rtnl_unlock(); } static bool tcf_block_offload_in_use(struct tcf_block *block) @@ -727,15 +673,16 @@ static int tcf_block_offload_cmd(struct tcf_block *block, struct flow_block_offload bo = {}; int err; - bo.net = dev_net(dev); - bo.command = command; - bo.binder_type = ei->binder_type; - bo.block = &block->flow_block; - bo.block_shared = tcf_block_shared(block); - bo.extack = extack; - INIT_LIST_HEAD(&bo.cb_list); + tcf_block_offload_init(&bo, dev, command, ei->binder_type, + &block->flow_block, tcf_block_shared(block), + extack); + + if (dev->netdev_ops->ndo_setup_tc) + err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_BLOCK, &bo); + else + err = flow_indr_dev_setup_offload(dev, TC_SETUP_BLOCK, block, + &bo, tc_block_indr_cleanup); - err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_BLOCK, &bo); if (err < 0) { if (err != -EOPNOTSUPP) NL_SET_ERR_MSG(extack, "Driver ndo_setup_tc failed"); @@ -753,13 +700,13 @@ static int tcf_block_offload_bind(struct tcf_block *block, struct Qdisc *q, int err; down_write(&block->cb_lock); - if (!dev->netdev_ops->ndo_setup_tc) - goto no_offload_dev_inc; /* If tc offload feature is disabled and the block we try to bind * to already has some offloaded filters, forbid to bind. 
*/ - if (!tc_can_offload(dev) && tcf_block_offload_in_use(block)) { + if (dev->netdev_ops->ndo_setup_tc && + !tc_can_offload(dev) && + tcf_block_offload_in_use(block)) { NL_SET_ERR_MSG(extack, "Bind to offloaded block failed as dev has offload disabled"); err = -EOPNOTSUPP; goto err_unlock; @@ -771,18 +718,15 @@ static int tcf_block_offload_bind(struct tcf_block *block, struct Qdisc *q, if (err) goto err_unlock; - tc_indr_block_call(block, dev, ei, FLOW_BLOCK_BIND, extack); up_write(&block->cb_lock); return 0; no_offload_dev_inc: - if (tcf_block_offload_in_use(block)) { - err = -EOPNOTSUPP; + if (tcf_block_offload_in_use(block)) goto err_unlock; - } + err = 0; block->nooffloaddevcnt++; - tc_indr_block_call(block, dev, ei, FLOW_BLOCK_BIND, extack); err_unlock: up_write(&block->cb_lock); return err; @@ -795,10 +739,6 @@ static void tcf_block_offload_unbind(struct tcf_block *block, struct Qdisc *q, int err; down_write(&block->cb_lock); - tc_indr_block_call(block, dev, ei, FLOW_BLOCK_UNBIND, NULL); - - if (!dev->netdev_ops->ndo_setup_tc) - goto no_offload_dev_dec; err = tcf_block_offload_cmd(block, dev, ei, FLOW_BLOCK_UNBIND, NULL); if (err == -EOPNOTSUPP) goto no_offload_dev_dec; @@ -3824,11 +3764,6 @@ static struct pernet_operations tcf_net_ops = { .size = sizeof(struct tcf_net), }; -static struct flow_indr_block_entry block_entry = { - .cb = tc_indr_block_get_and_cmd, - .list = LIST_HEAD_INIT(block_entry.list), -}; - static int __init tc_filter_init(void) { int err; @@ -3841,8 +3776,6 @@ static int __init tc_filter_init(void) if (err) goto err_register_pernet_subsys; - flow_indr_add_block_cb(&block_entry); - rtnl_register(PF_UNSPEC, RTM_NEWTFILTER, tc_new_tfilter, NULL, RTNL_FLAG_DOIT_UNLOCKED); rtnl_register(PF_UNSPEC, RTM_DELTFILTER, tc_del_tfilter, NULL, diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c index 96f5999281e0..b2da37286082 100644 --- a/net/sched/cls_flower.c +++ b/net/sched/cls_flower.c @@ -272,14 +272,16 @@ static struct cls_fl_filter *fl_lookup_range(struct fl_flow_mask *mask, return NULL; } -static struct cls_fl_filter *fl_lookup(struct fl_flow_mask *mask, - struct fl_flow_key *mkey, - struct fl_flow_key *key) +static noinline_for_stack +struct cls_fl_filter *fl_mask_lookup(struct fl_flow_mask *mask, struct fl_flow_key *key) { + struct fl_flow_key mkey; + + fl_set_masked_key(&mkey, key, mask); if ((mask->flags & TCA_FLOWER_MASK_FLAGS_RANGE)) - return fl_lookup_range(mask, mkey, key); + return fl_lookup_range(mask, &mkey, key); - return __fl_lookup(mask, mkey); + return __fl_lookup(mask, &mkey); } static u16 fl_ct_info_to_flower_map[] = { @@ -299,7 +301,6 @@ static int fl_classify(struct sk_buff *skb, const struct tcf_proto *tp, struct tcf_result *res) { struct cls_fl_head *head = rcu_dereference_bh(tp->root); - struct fl_flow_key skb_mkey; struct fl_flow_key skb_key; struct fl_flow_mask *mask; struct cls_fl_filter *f; @@ -319,9 +320,7 @@ static int fl_classify(struct sk_buff *skb, const struct tcf_proto *tp, ARRAY_SIZE(fl_ct_info_to_flower_map)); skb_flow_dissect(skb, &mask->dissector, &skb_key, 0); - fl_set_masked_key(&skb_mkey, &skb_key, mask); - - f = fl_lookup(mask, &skb_mkey, &skb_key); + f = fl_mask_lookup(mask, &skb_key); if (f && !tc_skip_sw(f->flags)) { *res = f->res; return tcf_exts_exec(skb, &f->exts, res); @@ -728,11 +727,6 @@ erspan_opt_policy[TCA_FLOWER_KEY_ENC_OPT_ERSPAN_MAX + 1] = { }; static const struct nla_policy -mpls_opts_policy[TCA_FLOWER_KEY_MPLS_OPTS_MAX + 1] = { - [TCA_FLOWER_KEY_MPLS_OPTS_LSE] = { .type = NLA_NESTED }, -}; - -static 
const struct nla_policy mpls_stack_entry_policy[TCA_FLOWER_KEY_MPLS_OPT_LSE_MAX + 1] = { [TCA_FLOWER_KEY_MPLS_OPT_LSE_DEPTH] = { .type = NLA_U8 }, [TCA_FLOWER_KEY_MPLS_OPT_LSE_TTL] = { .type = NLA_U8 }, diff --git a/net/sched/sch_fq_pie.c b/net/sched/sch_fq_pie.c index a9da8776bf5b..fb760cee824e 100644 --- a/net/sched/sch_fq_pie.c +++ b/net/sched/sch_fq_pie.c @@ -297,9 +297,9 @@ static int fq_pie_change(struct Qdisc *sch, struct nlattr *opt, goto flow_error; } q->flows_cnt = nla_get_u32(tb[TCA_FQ_PIE_FLOWS]); - if (!q->flows_cnt || q->flows_cnt > 65536) { + if (!q->flows_cnt || q->flows_cnt >= 65536) { NL_SET_ERR_MSG_MOD(extack, - "Number of flows must be < 65536"); + "Number of flows must range in [1..65535]"); goto flow_error; } } diff --git a/net/sctp/Kconfig b/net/sctp/Kconfig index 6e2eb1dd64ed..68934438ee19 100644 --- a/net/sctp/Kconfig +++ b/net/sctp/Kconfig @@ -31,7 +31,7 @@ menuconfig IP_SCTP homing at either or both ends of an association." To compile this protocol support as a module, choose M here: the - module will be called sctp. Debug messages are handeled by the + module will be called sctp. Debug messages are handled by the kernel's dynamic debugging framework. If in doubt, say N. diff --git a/net/sctp/ulpevent.c b/net/sctp/ulpevent.c index f0640306e77f..0c3d2b4d7321 100644 --- a/net/sctp/ulpevent.c +++ b/net/sctp/ulpevent.c @@ -343,6 +343,9 @@ void sctp_ulpevent_notify_peer_addr_change(struct sctp_transport *transport, struct sockaddr_storage addr; struct sctp_ulpevent *event; + if (asoc->state < SCTP_STATE_ESTABLISHED) + return; + memset(&addr, 0, sizeof(struct sockaddr_storage)); memcpy(&addr, &transport->ipaddr, transport->af_specific->sockaddr_len); diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c index 2d399b6c4075..8c2763eb6aae 100644 --- a/net/tls/tls_sw.c +++ b/net/tls/tls_sw.c @@ -206,10 +206,12 @@ static void tls_decrypt_done(struct crypto_async_request *req, int err) kfree(aead_req); + spin_lock_bh(&ctx->decrypt_compl_lock); pending = atomic_dec_return(&ctx->decrypt_pending); - if (!pending && READ_ONCE(ctx->async_notify)) + if (!pending && ctx->async_notify) complete(&ctx->async_wait.completion); + spin_unlock_bh(&ctx->decrypt_compl_lock); } static int tls_do_decryption(struct sock *sk, @@ -467,10 +469,12 @@ static void tls_encrypt_done(struct crypto_async_request *req, int err) ready = true; } + spin_lock_bh(&ctx->encrypt_compl_lock); pending = atomic_dec_return(&ctx->encrypt_pending); - if (!pending && READ_ONCE(ctx->async_notify)) + if (!pending && ctx->async_notify) complete(&ctx->async_wait.completion); + spin_unlock_bh(&ctx->encrypt_compl_lock); if (!ready) return; @@ -929,6 +933,7 @@ int tls_sw_sendmsg(struct sock *sk, struct msghdr *msg, size_t size) int num_zc = 0; int orig_size; int ret = 0; + int pending; if (msg->msg_flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL)) return -EOPNOTSUPP; @@ -1095,13 +1100,19 @@ trim_sgl: goto send_end; } else if (num_zc) { /* Wait for pending encryptions to get completed */ - smp_store_mb(ctx->async_notify, true); + spin_lock_bh(&ctx->encrypt_compl_lock); + ctx->async_notify = true; - if (atomic_read(&ctx->encrypt_pending)) + pending = atomic_read(&ctx->encrypt_pending); + spin_unlock_bh(&ctx->encrypt_compl_lock); + if (pending) crypto_wait_req(-EINPROGRESS, &ctx->async_wait); else reinit_completion(&ctx->async_wait.completion); + /* There can be no concurrent accesses, since we have no + * pending encrypt operations + */ WRITE_ONCE(ctx->async_notify, false); if (ctx->async_wait.err) { @@ -1732,6 +1743,7 @@ int 
tls_sw_recvmsg(struct sock *sk, bool is_kvec = iov_iter_is_kvec(&msg->msg_iter); bool is_peek = flags & MSG_PEEK; int num_async = 0; + int pending; flags |= nonblock; @@ -1894,8 +1906,11 @@ pick_next_record: recv_end: if (num_async) { /* Wait for all previously submitted records to be decrypted */ - smp_store_mb(ctx->async_notify, true); - if (atomic_read(&ctx->decrypt_pending)) { + spin_lock_bh(&ctx->decrypt_compl_lock); + ctx->async_notify = true; + pending = atomic_read(&ctx->decrypt_pending); + spin_unlock_bh(&ctx->decrypt_compl_lock); + if (pending) { err = crypto_wait_req(-EINPROGRESS, &ctx->async_wait); if (err) { /* one of async decrypt failed */ @@ -1907,6 +1922,10 @@ recv_end: } else { reinit_completion(&ctx->async_wait.completion); } + + /* There can be no concurrent accesses, since we have no + * pending decrypt operations + */ WRITE_ONCE(ctx->async_notify, false); /* Drain records from the rx_list & copy if required */ @@ -2293,6 +2312,7 @@ int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx) if (tx) { crypto_init_wait(&sw_ctx_tx->async_wait); + spin_lock_init(&sw_ctx_tx->encrypt_compl_lock); crypto_info = &ctx->crypto_send.info; cctx = &ctx->tx; aead = &sw_ctx_tx->aead_send; @@ -2301,6 +2321,7 @@ int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx) sw_ctx_tx->tx_work.sk = sk; } else { crypto_init_wait(&sw_ctx_rx->async_wait); + spin_lock_init(&sw_ctx_rx->decrypt_compl_lock); crypto_info = &ctx->crypto_recv.info; cctx = &ctx->rx; skb_queue_head_init(&sw_ctx_rx->rx_list); diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c index a5f28708e0e7..626bf9044418 100644 --- a/net/vmw_vsock/af_vsock.c +++ b/net/vmw_vsock/af_vsock.c @@ -1408,7 +1408,7 @@ static int vsock_accept(struct socket *sock, struct socket *newsock, int flags, /* Wait for children sockets to appear; these are the new sockets * created upon connection establishment. */ - timeout = sock_sndtimeo(listener, flags & O_NONBLOCK); + timeout = sock_rcvtimeo(listener, flags & O_NONBLOCK); prepare_to_wait(sk_sleep(listener), &wait, TASK_INTERRUPTIBLE); while ((connected = vsock_dequeue_accept(listener)) == NULL && diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c index 69efc891885f..0edda1edf988 100644 --- a/net/vmw_vsock/virtio_transport_common.c +++ b/net/vmw_vsock/virtio_transport_common.c @@ -1132,6 +1132,14 @@ void virtio_transport_recv_pkt(struct virtio_transport *t, lock_sock(sk); + /* Check if sk has been released before lock_sock */ + if (sk->sk_shutdown == SHUTDOWN_MASK) { + (void)virtio_transport_reset_no_sock(t, pkt); + release_sock(sk); + sock_put(sk); + goto free_pkt; + } + /* Update CID in case it has changed after a transport reset event */ vsk->local_addr.svm_cid = dst.svm_cid; diff --git a/net/wireless/Kconfig b/net/wireless/Kconfig index 63cf7131f601..813e93644ae7 100644 --- a/net/wireless/Kconfig +++ b/net/wireless/Kconfig @@ -181,8 +181,8 @@ config CFG80211_CRDA_SUPPORT default y help You should enable this option unless you know for sure you have no - need for it, for example when using internal regdb (above) or the - database loaded as a firmware file. + need for it, for example when using the regulatory database loaded as + a firmware file. If unsure, say Y. 
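The net/tls hunks above replace the lock-free smp_store_mb()/READ_ONCE() handshake on ctx->async_notify with a per-direction spinlock (encrypt_compl_lock/decrypt_compl_lock) taken around both the pending-operation counter and the notify flag, so a crypto callback can no longer decrement the counter and test the flag in the window before the waiter has published it. As a rough userspace analogy only — the compl_ctx type and the pthread-based names below are invented for illustration and are not the kernel code — the pattern looks like this:

	#include <pthread.h>
	#include <stdbool.h>

	struct compl_ctx {
		pthread_mutex_t lock;	/* stands in for {en,de}crypt_compl_lock */
		pthread_cond_t done;	/* stands in for async_wait.completion */
		int pending;		/* stands in for the pending-op counter */
		bool notify;		/* stands in for async_notify */
	};

	/* async callback path: decrement and signal under the same lock */
	static void op_done(struct compl_ctx *c)
	{
		pthread_mutex_lock(&c->lock);
		if (--c->pending == 0 && c->notify)
			pthread_cond_signal(&c->done);
		pthread_mutex_unlock(&c->lock);
	}

	/* submitter path: publish notify and sample pending atomically */
	static void wait_for_all(struct compl_ctx *c)
	{
		pthread_mutex_lock(&c->lock);
		c->notify = true;
		while (c->pending)
			pthread_cond_wait(&c->done, &c->lock);
		c->notify = false;
		pthread_mutex_unlock(&c->lock);
	}

Because both sides hold the same lock, the waiter either observes pending already at zero or is guaranteed to receive the signal for the last completion; that lost-wakeup window is the race the spinlock in the patch closes.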
diff --git a/net/wireless/chan.c b/net/wireless/chan.c index e111c08daa0e..cddf92c5d09e 100644 --- a/net/wireless/chan.c +++ b/net/wireless/chan.c @@ -6,7 +6,7 @@ * * Copyright 2009 Johannes Berg <johannes@sipsolutions.net> * Copyright 2013-2014 Intel Mobile Communications GmbH - * Copyright 2018 Intel Corporation + * Copyright 2018-2020 Intel Corporation */ #include <linux/export.h> @@ -919,7 +919,8 @@ bool cfg80211_chandef_usable(struct wiphy *wiphy, width = 10; break; case NL80211_CHAN_WIDTH_20: - if (!ht_cap->ht_supported) + if (!ht_cap->ht_supported && + chandef->chan->band != NL80211_BAND_6GHZ) return false; /* fall through */ case NL80211_CHAN_WIDTH_20_NOHT: @@ -928,6 +929,8 @@ bool cfg80211_chandef_usable(struct wiphy *wiphy, break; case NL80211_CHAN_WIDTH_40: width = 40; + if (chandef->chan->band == NL80211_BAND_6GHZ) + break; if (!ht_cap->ht_supported) return false; if (!(ht_cap->cap & IEEE80211_HT_CAP_SUP_WIDTH_20_40) || @@ -942,24 +945,29 @@ bool cfg80211_chandef_usable(struct wiphy *wiphy, break; case NL80211_CHAN_WIDTH_80P80: cap = vht_cap->cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_MASK; - if (cap != IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160_80PLUS80MHZ) + if (chandef->chan->band != NL80211_BAND_6GHZ && + cap != IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160_80PLUS80MHZ) return false; /* fall through */ case NL80211_CHAN_WIDTH_80: - if (!vht_cap->vht_supported) - return false; prohibited_flags |= IEEE80211_CHAN_NO_80MHZ; width = 80; + if (chandef->chan->band == NL80211_BAND_6GHZ) + break; + if (!vht_cap->vht_supported) + return false; break; case NL80211_CHAN_WIDTH_160: + prohibited_flags |= IEEE80211_CHAN_NO_160MHZ; + width = 160; + if (chandef->chan->band == NL80211_BAND_6GHZ) + break; if (!vht_cap->vht_supported) return false; cap = vht_cap->cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_MASK; if (cap != IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160MHZ && cap != IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160_80PLUS80MHZ) return false; - prohibited_flags |= IEEE80211_CHAN_NO_160MHZ; - width = 160; break; default: WARN_ON_ONCE(1); diff --git a/net/wireless/core.c b/net/wireless/core.c index b795f363d004..f0226ae9561c 100644 --- a/net/wireless/core.c +++ b/net/wireless/core.c @@ -5,7 +5,7 @@ * Copyright 2006-2010 Johannes Berg <johannes@sipsolutions.net> * Copyright 2013-2014 Intel Mobile Communications GmbH * Copyright 2015-2017 Intel Deutschland GmbH - * Copyright (C) 2018-2019 Intel Corporation + * Copyright (C) 2018-2020 Intel Corporation */ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt @@ -142,7 +142,7 @@ int cfg80211_dev_rename(struct cfg80211_registered_device *rdev, if (result) return result; - if (rdev->wiphy.debugfsdir) + if (!IS_ERR_OR_NULL(rdev->wiphy.debugfsdir)) debugfs_rename(rdev->wiphy.debugfsdir->d_parent, rdev->wiphy.debugfsdir, rdev->wiphy.debugfsdir->d_parent, newname); @@ -791,6 +791,7 @@ int wiphy_register(struct wiphy *wiphy) /* sanity check supported bands/channels */ for (band = 0; band < NUM_NL80211_BANDS; band++) { u16 types = 0; + bool have_he = false; sband = wiphy->bands[band]; if (!sband) @@ -807,6 +808,11 @@ int wiphy_register(struct wiphy *wiphy) !sband->n_bitrates)) return -EINVAL; + if (WARN_ON(band == NL80211_BAND_6GHZ && + (sband->ht_cap.ht_supported || + sband->vht_cap.vht_supported))) + return -EINVAL; + /* * Since cfg80211_disable_40mhz_24ghz is global, we can * modify the sband's ht data even if the driver uses a @@ -854,8 +860,17 @@ int wiphy_register(struct wiphy *wiphy) return -EINVAL; types |= iftd->types_mask; + + if (i == 0) + have_he = iftd->he_cap.has_he; + else + 
have_he = have_he && + iftd->he_cap.has_he; } + if (WARN_ON(!have_he && band == NL80211_BAND_6GHZ)) + return -EINVAL; + have_band = true; } diff --git a/net/wireless/core.h b/net/wireless/core.h index 639d41896573..e0e5b3ee9699 100644 --- a/net/wireless/core.h +++ b/net/wireless/core.h @@ -286,7 +286,7 @@ struct cfg80211_cqm_config { u32 rssi_hyst; s32 last_rssi_event_value; int n_rssi_thresholds; - s32 rssi_thresholds[0]; + s32 rssi_thresholds[]; }; void cfg80211_destroy_ifaces(struct cfg80211_registered_device *rdev); diff --git a/net/wireless/mlme.c b/net/wireless/mlme.c index 409497a3527d..189334314cba 100644 --- a/net/wireless/mlme.c +++ b/net/wireless/mlme.c @@ -729,8 +729,8 @@ int cfg80211_mlme_mgmt_tx(struct cfg80211_registered_device *rdev, return rdev_mgmt_tx(rdev, wdev, params, cookie); } -bool cfg80211_rx_mgmt(struct wireless_dev *wdev, int freq, int sig_dbm, - const u8 *buf, size_t len, u32 flags) +bool cfg80211_rx_mgmt_khz(struct wireless_dev *wdev, int freq, int sig_dbm, + const u8 *buf, size_t len, u32 flags) { struct wiphy *wiphy = wdev->wiphy; struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy); @@ -785,7 +785,7 @@ bool cfg80211_rx_mgmt(struct wireless_dev *wdev, int freq, int sig_dbm, trace_cfg80211_return_bool(result); return result; } -EXPORT_SYMBOL(cfg80211_rx_mgmt); +EXPORT_SYMBOL(cfg80211_rx_mgmt_khz); void cfg80211_sched_dfs_chan_update(struct cfg80211_registered_device *rdev) { diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c index fa66d5b6f557..263ae395ad44 100644 --- a/net/wireless/nl80211.c +++ b/net/wireless/nl80211.c @@ -329,6 +329,15 @@ he_bss_color_policy[NL80211_HE_BSS_COLOR_ATTR_MAX + 1] = { [NL80211_HE_BSS_COLOR_ATTR_PARTIAL] = { .type = NLA_FLAG }, }; +static const struct nla_policy nl80211_txattr_policy[NL80211_TXRATE_MAX + 1] = { + [NL80211_TXRATE_LEGACY] = { .type = NLA_BINARY, + .len = NL80211_MAX_SUPP_RATES }, + [NL80211_TXRATE_HT] = { .type = NLA_BINARY, + .len = NL80211_MAX_SUPP_HT_RATES }, + [NL80211_TXRATE_VHT] = NLA_POLICY_EXACT_LEN_WARN(sizeof(struct nl80211_txrate_vht)), + [NL80211_TXRATE_GI] = { .type = NLA_U8 }, +}; + static const struct nla_policy nl80211_tid_config_attr_policy[NL80211_TID_CONFIG_ATTR_MAX + 1] = { [NL80211_TID_CONFIG_ATTR_VIF_SUPP] = { .type = NLA_U64 }, @@ -343,6 +352,12 @@ nl80211_tid_config_attr_policy[NL80211_TID_CONFIG_ATTR_MAX + 1] = { NLA_POLICY_MAX(NLA_U8, NL80211_TID_CONFIG_DISABLE), [NL80211_TID_CONFIG_ATTR_RTSCTS_CTRL] = NLA_POLICY_MAX(NLA_U8, NL80211_TID_CONFIG_DISABLE), + [NL80211_TID_CONFIG_ATTR_AMSDU_CTRL] = + NLA_POLICY_MAX(NLA_U8, NL80211_TID_CONFIG_DISABLE), + [NL80211_TID_CONFIG_ATTR_TX_RATE_TYPE] = + NLA_POLICY_MAX(NLA_U8, NL80211_TX_RATE_FIXED), + [NL80211_TID_CONFIG_ATTR_TX_RATE] = + NLA_POLICY_NESTED(nl80211_txattr_policy), }; static const struct nla_policy nl80211_policy[NUM_NL80211_ATTR] = { @@ -363,6 +378,7 @@ static const struct nla_policy nl80211_policy[NUM_NL80211_ATTR] = { [NL80211_ATTR_CHANNEL_WIDTH] = { .type = NLA_U32 }, [NL80211_ATTR_CENTER_FREQ1] = { .type = NLA_U32 }, + [NL80211_ATTR_CENTER_FREQ1_OFFSET] = NLA_POLICY_RANGE(NLA_U32, 0, 999), [NL80211_ATTR_CENTER_FREQ2] = { .type = NLA_U32 }, [NL80211_ATTR_WIPHY_RETRY_SHORT] = NLA_POLICY_MIN(NLA_U8, 1), @@ -636,6 +652,12 @@ static const struct nla_policy nl80211_policy[NUM_NL80211_ATTR] = { [NL80211_ATTR_PMK_LIFETIME] = NLA_POLICY_MIN(NLA_U32, 1), [NL80211_ATTR_PMK_REAUTH_THRESHOLD] = NLA_POLICY_RANGE(NLA_U8, 1, 100), [NL80211_ATTR_RECEIVE_MULTICAST] = { .type = NLA_FLAG }, + [NL80211_ATTR_WIPHY_FREQ_OFFSET] = 
NLA_POLICY_RANGE(NLA_U32, 0, 999), + [NL80211_ATTR_SCAN_FREQ_KHZ] = { .type = NLA_NESTED }, + [NL80211_ATTR_HE_6GHZ_CAPABILITY] = { + .type = NLA_EXACT_LEN, + .len = sizeof(struct ieee80211_he_6ghz_capa), + }, }; /* policy for the key attributes */ @@ -708,9 +730,16 @@ nl80211_coalesce_policy[NUM_NL80211_ATTR_COALESCE_RULE] = { /* policy for GTK rekey offload attributes */ static const struct nla_policy nl80211_rekey_policy[NUM_NL80211_REKEY_DATA] = { - [NL80211_REKEY_DATA_KEK] = NLA_POLICY_EXACT_LEN_WARN(NL80211_KEK_LEN), - [NL80211_REKEY_DATA_KCK] = NLA_POLICY_EXACT_LEN_WARN(NL80211_KCK_LEN), + [NL80211_REKEY_DATA_KEK] = { + .type = NLA_BINARY, + .len = NL80211_KEK_EXT_LEN + }, + [NL80211_REKEY_DATA_KCK] = { + .type = NLA_BINARY, + .len = NL80211_KCK_EXT_LEN + }, [NL80211_REKEY_DATA_REPLAY_CTR] = NLA_POLICY_EXACT_LEN_WARN(NL80211_REPLAY_CTR_LEN), + [NL80211_REKEY_DATA_AKM] = { .type = NLA_U32 }, }; static const struct nla_policy @@ -902,6 +931,9 @@ static int nl80211_msg_put_channel(struct sk_buff *msg, struct wiphy *wiphy, chan->center_freq)) goto nla_put_failure; + if (nla_put_u32(msg, NL80211_FREQUENCY_ATTR_OFFSET, chan->freq_offset)) + goto nla_put_failure; + if ((chan->flags & IEEE80211_CHAN_DISABLED) && nla_put_flag(msg, NL80211_FREQUENCY_ATTR_DISABLED)) goto nla_put_failure; @@ -1307,13 +1339,11 @@ static int nl80211_key_allowed(struct wireless_dev *wdev) } static struct ieee80211_channel *nl80211_get_valid_chan(struct wiphy *wiphy, - struct nlattr *tb) + u32 freq) { struct ieee80211_channel *chan; - if (tb == NULL) - return NULL; - chan = ieee80211_get_channel(wiphy, nla_get_u32(tb)); + chan = ieee80211_get_channel_khz(wiphy, freq); if (!chan || chan->flags & IEEE80211_CHAN_DISABLED) return NULL; return chan; @@ -1539,6 +1569,7 @@ static int nl80211_send_coalesce(struct sk_buff *msg, static int nl80211_send_iftype_data(struct sk_buff *msg, + const struct ieee80211_supported_band *sband, const struct ieee80211_sband_iftype_data *iftdata) { const struct ieee80211_sta_he_cap *he_cap = &iftdata->he_cap; @@ -1562,6 +1593,12 @@ nl80211_send_iftype_data(struct sk_buff *msg, return -ENOBUFS; } + if (sband->band == NL80211_BAND_6GHZ && + nla_put(msg, NL80211_BAND_IFTYPE_ATTR_HE_6GHZ_CAPA, + sizeof(iftdata->he_6ghz_capa), + &iftdata->he_6ghz_capa)) + return -ENOBUFS; + return 0; } @@ -1610,7 +1647,7 @@ static int nl80211_send_band_rateinfo(struct sk_buff *msg, if (!iftdata) return -ENOBUFS; - err = nl80211_send_iftype_data(msg, + err = nl80211_send_iftype_data(msg, sband, &sband->iftype_data[i]); if (err) return err; @@ -2768,13 +2805,17 @@ int nl80211_parse_chandef(struct cfg80211_registered_device *rdev, if (!attrs[NL80211_ATTR_WIPHY_FREQ]) return -EINVAL; - control_freq = nla_get_u32(attrs[NL80211_ATTR_WIPHY_FREQ]); + control_freq = MHZ_TO_KHZ( + nla_get_u32(info->attrs[NL80211_ATTR_WIPHY_FREQ])); + if (info->attrs[NL80211_ATTR_WIPHY_FREQ_OFFSET]) + control_freq += + nla_get_u32(info->attrs[NL80211_ATTR_WIPHY_FREQ_OFFSET]); memset(chandef, 0, sizeof(*chandef)); - - chandef->chan = ieee80211_get_channel(&rdev->wiphy, control_freq); + chandef->chan = ieee80211_get_channel_khz(&rdev->wiphy, control_freq); chandef->width = NL80211_CHAN_WIDTH_20_NOHT; - chandef->center_freq1 = control_freq; + chandef->center_freq1 = KHZ_TO_MHZ(control_freq); + chandef->freq1_offset = control_freq % 1000; chandef->center_freq2 = 0; /* Primary channel not allowed */ @@ -2822,9 +2863,15 @@ int nl80211_parse_chandef(struct cfg80211_registered_device *rdev, } else if (attrs[NL80211_ATTR_CHANNEL_WIDTH]) { 
chandef->width = nla_get_u32(attrs[NL80211_ATTR_CHANNEL_WIDTH]); - if (attrs[NL80211_ATTR_CENTER_FREQ1]) + if (attrs[NL80211_ATTR_CENTER_FREQ1]) { chandef->center_freq1 = nla_get_u32(attrs[NL80211_ATTR_CENTER_FREQ1]); + if (attrs[NL80211_ATTR_CENTER_FREQ1_OFFSET]) + chandef->freq1_offset = nla_get_u32( + attrs[NL80211_ATTR_CENTER_FREQ1_OFFSET]); + else + chandef->freq1_offset = 0; + } if (attrs[NL80211_ATTR_CENTER_FREQ2]) chandef->center_freq2 = nla_get_u32(attrs[NL80211_ATTR_CENTER_FREQ2]); @@ -3257,6 +3304,9 @@ static int nl80211_send_chandef(struct sk_buff *msg, if (nla_put_u32(msg, NL80211_ATTR_WIPHY_FREQ, chandef->chan->center_freq)) return -ENOBUFS; + if (nla_put_u32(msg, NL80211_ATTR_WIPHY_FREQ_OFFSET, + chandef->chan->freq_offset)) + return -ENOBUFS; switch (chandef->width) { case NL80211_CHAN_WIDTH_20_NOHT: case NL80211_CHAN_WIDTH_20: @@ -4369,16 +4419,9 @@ static bool vht_set_mcs_mask(struct ieee80211_supported_band *sband, return true; } -static const struct nla_policy nl80211_txattr_policy[NL80211_TXRATE_MAX + 1] = { - [NL80211_TXRATE_LEGACY] = { .type = NLA_BINARY, - .len = NL80211_MAX_SUPP_RATES }, - [NL80211_TXRATE_HT] = { .type = NLA_BINARY, - .len = NL80211_MAX_SUPP_HT_RATES }, - [NL80211_TXRATE_VHT] = NLA_POLICY_EXACT_LEN_WARN(sizeof(struct nl80211_txrate_vht)), - [NL80211_TXRATE_GI] = { .type = NLA_U8 }, -}; - static int nl80211_parse_tx_bitrate_mask(struct genl_info *info, + struct nlattr *attrs[], + enum nl80211_attrs attr, struct cfg80211_bitrate_mask *mask) { struct nlattr *tb[NL80211_TXRATE_MAX + 1]; @@ -4409,14 +4452,14 @@ static int nl80211_parse_tx_bitrate_mask(struct genl_info *info, } /* if no rates are given set it back to the defaults */ - if (!info->attrs[NL80211_ATTR_TX_RATES]) + if (!attrs[attr]) goto out; /* The nested attribute uses enum nl80211_band as the index. This maps * directly to the enum nl80211_band values used in cfg80211. 
*/ BUILD_BUG_ON(NL80211_MAX_SUPP_HT_RATES > IEEE80211_HT_MCS_MASK_LEN * 8); - nla_for_each_nested(tx_rates, info->attrs[NL80211_ATTR_TX_RATES], rem) { + nla_for_each_nested(tx_rates, attrs[attr], rem) { enum nl80211_band band = nla_type(tx_rates); int err; @@ -4921,7 +4964,9 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info) return -EINVAL; if (info->attrs[NL80211_ATTR_TX_RATES]) { - err = nl80211_parse_tx_bitrate_mask(info, ¶ms.beacon_rate); + err = nl80211_parse_tx_bitrate_mask(info, info->attrs, + NL80211_ATTR_TX_RATES, + ¶ms.beacon_rate); if (err) return err; @@ -5962,6 +6007,10 @@ static int nl80211_set_station(struct sk_buff *skb, struct genl_info *info) nla_get_u8(info->attrs[NL80211_ATTR_OPMODE_NOTIF]); } + if (info->attrs[NL80211_ATTR_HE_6GHZ_CAPABILITY]) + params.he_6ghz_capa = + nla_data(info->attrs[NL80211_ATTR_HE_CAPABILITY]); + if (info->attrs[NL80211_ATTR_AIRTIME_WEIGHT]) params.airtime_weight = nla_get_u16(info->attrs[NL80211_ATTR_AIRTIME_WEIGHT]); @@ -6096,6 +6145,10 @@ static int nl80211_new_station(struct sk_buff *skb, struct genl_info *info) return -EINVAL; } + if (info->attrs[NL80211_ATTR_HE_6GHZ_CAPABILITY]) + params.he_6ghz_capa = + nla_data(info->attrs[NL80211_ATTR_HE_6GHZ_CAPABILITY]); + if (info->attrs[NL80211_ATTR_OPMODE_NOTIF]) { params.opmode_notif_used = true; params.opmode_notif = @@ -6140,10 +6193,14 @@ static int nl80211_new_station(struct sk_buff *skb, struct genl_info *info) params.vht_capa = NULL; /* HE requires WME */ - if (params.he_capa_len) + if (params.he_capa_len || params.he_6ghz_capa) return -EINVAL; } + /* Ensure that HT/VHT capabilities are not set for 6 GHz HE STA */ + if (params.he_6ghz_capa && (params.ht_capa || params.vht_capa)) + return -EINVAL; + /* When you run into this, adjust the code below for the new flag */ BUILD_BUG_ON(NL80211_STA_FLAG_MAX != 7); @@ -7701,6 +7758,8 @@ static int nl80211_trigger_scan(struct sk_buff *skb, struct genl_info *info) struct cfg80211_registered_device *rdev = info->user_ptr[0]; struct wireless_dev *wdev = info->user_ptr[1]; struct cfg80211_scan_request *request; + struct nlattr *scan_freqs = NULL; + bool scan_freqs_khz = false; struct nlattr *attr; struct wiphy *wiphy; int err, tmp, n_ssids = 0, n_channels, i; @@ -7719,9 +7778,17 @@ static int nl80211_trigger_scan(struct sk_buff *skb, struct genl_info *info) goto unlock; } - if (info->attrs[NL80211_ATTR_SCAN_FREQUENCIES]) { - n_channels = validate_scan_freqs( - info->attrs[NL80211_ATTR_SCAN_FREQUENCIES]); + if (info->attrs[NL80211_ATTR_SCAN_FREQ_KHZ]) { + if (!wiphy_ext_feature_isset(wiphy, + NL80211_EXT_FEATURE_SCAN_FREQ_KHZ)) + return -EOPNOTSUPP; + scan_freqs = info->attrs[NL80211_ATTR_SCAN_FREQ_KHZ]; + scan_freqs_khz = true; + } else if (info->attrs[NL80211_ATTR_SCAN_FREQUENCIES]) + scan_freqs = info->attrs[NL80211_ATTR_SCAN_FREQUENCIES]; + + if (scan_freqs) { + n_channels = validate_scan_freqs(scan_freqs); if (!n_channels) { err = -EINVAL; goto unlock; @@ -7769,13 +7836,16 @@ static int nl80211_trigger_scan(struct sk_buff *skb, struct genl_info *info) } i = 0; - if (info->attrs[NL80211_ATTR_SCAN_FREQUENCIES]) { + if (scan_freqs) { /* user specified, bail out if channel not found */ - nla_for_each_nested(attr, info->attrs[NL80211_ATTR_SCAN_FREQUENCIES], tmp) { + nla_for_each_nested(attr, scan_freqs, tmp) { struct ieee80211_channel *chan; + int freq = nla_get_u32(attr); - chan = ieee80211_get_channel(wiphy, nla_get_u32(attr)); + if (!scan_freqs_khz) + freq = MHZ_TO_KHZ(freq); + chan = ieee80211_get_channel_khz(wiphy, freq); if 
(!chan) { err = -EINVAL; goto out_free; @@ -8871,6 +8941,8 @@ static int nl80211_send_bss(struct sk_buff *msg, struct netlink_callback *cb, goto nla_put_failure; if (nla_put_u16(msg, NL80211_BSS_CAPABILITY, res->capability) || nla_put_u32(msg, NL80211_BSS_FREQUENCY, res->channel->center_freq) || + nla_put_u32(msg, NL80211_BSS_FREQUENCY_OFFSET, + res->channel->freq_offset) || nla_put_u32(msg, NL80211_BSS_CHAN_WIDTH, res->scan_width) || nla_put_u32(msg, NL80211_BSS_SEEN_MS_AGO, jiffies_to_msecs(jiffies - intbss->ts))) @@ -9139,6 +9211,7 @@ static int nl80211_authenticate(struct sk_buff *skb, struct genl_info *info) enum nl80211_auth_type auth_type; struct key_parse key; bool local_state_change; + u32 freq; if (!info->attrs[NL80211_ATTR_MAC]) return -EINVAL; @@ -9195,8 +9268,12 @@ static int nl80211_authenticate(struct sk_buff *skb, struct genl_info *info) return -EOPNOTSUPP; bssid = nla_data(info->attrs[NL80211_ATTR_MAC]); - chan = nl80211_get_valid_chan(&rdev->wiphy, - info->attrs[NL80211_ATTR_WIPHY_FREQ]); + freq = MHZ_TO_KHZ(nla_get_u32(info->attrs[NL80211_ATTR_WIPHY_FREQ])); + if (info->attrs[NL80211_ATTR_WIPHY_FREQ_OFFSET]) + freq += + nla_get_u32(info->attrs[NL80211_ATTR_WIPHY_FREQ_OFFSET]); + + chan = nl80211_get_valid_chan(&rdev->wiphy, freq); if (!chan) return -EINVAL; @@ -9386,6 +9463,7 @@ static int nl80211_associate(struct sk_buff *skb, struct genl_info *info) struct cfg80211_assoc_request req = {}; const u8 *bssid, *ssid; int err, ssid_len = 0; + u32 freq; if (dev->ieee80211_ptr->conn_owner_nlportid && dev->ieee80211_ptr->conn_owner_nlportid != info->snd_portid) @@ -9405,8 +9483,11 @@ static int nl80211_associate(struct sk_buff *skb, struct genl_info *info) bssid = nla_data(info->attrs[NL80211_ATTR_MAC]); - chan = nl80211_get_valid_chan(&rdev->wiphy, - info->attrs[NL80211_ATTR_WIPHY_FREQ]); + freq = MHZ_TO_KHZ(nla_get_u32(info->attrs[NL80211_ATTR_WIPHY_FREQ])); + if (info->attrs[NL80211_ATTR_WIPHY_FREQ_OFFSET]) + freq += + nla_get_u32(info->attrs[NL80211_ATTR_WIPHY_FREQ_OFFSET]); + chan = nl80211_get_valid_chan(&rdev->wiphy, freq); if (!chan) return -EINVAL; @@ -10086,6 +10167,7 @@ static int nl80211_connect(struct sk_buff *skb, struct genl_info *info) struct cfg80211_connect_params connect; struct wiphy *wiphy; struct cfg80211_cached_keys *connkeys = NULL; + u32 freq = 0; int err; memset(&connect, 0, sizeof(connect)); @@ -10156,14 +10238,21 @@ static int nl80211_connect(struct sk_buff *skb, struct genl_info *info) connect.prev_bssid = nla_data(info->attrs[NL80211_ATTR_PREV_BSSID]); - if (info->attrs[NL80211_ATTR_WIPHY_FREQ]) { - connect.channel = nl80211_get_valid_chan( - wiphy, info->attrs[NL80211_ATTR_WIPHY_FREQ]); + if (info->attrs[NL80211_ATTR_WIPHY_FREQ]) + freq = MHZ_TO_KHZ(nla_get_u32( + info->attrs[NL80211_ATTR_WIPHY_FREQ])); + if (info->attrs[NL80211_ATTR_WIPHY_FREQ_OFFSET]) + freq += + nla_get_u32(info->attrs[NL80211_ATTR_WIPHY_FREQ_OFFSET]); + + if (freq) { + connect.channel = nl80211_get_valid_chan(wiphy, freq); if (!connect.channel) return -EINVAL; } else if (info->attrs[NL80211_ATTR_WIPHY_FREQ_HINT]) { - connect.channel_hint = nl80211_get_valid_chan( - wiphy, info->attrs[NL80211_ATTR_WIPHY_FREQ_HINT]); + freq = nla_get_u32(info->attrs[NL80211_ATTR_WIPHY_FREQ_HINT]); + freq = MHZ_TO_KHZ(freq); + connect.channel_hint = nl80211_get_valid_chan(wiphy, freq); if (!connect.channel_hint) return -EINVAL; } @@ -10702,7 +10791,8 @@ static int nl80211_set_tx_bitrate_mask(struct sk_buff *skb, if (!rdev->ops->set_bitrate_mask) return -EOPNOTSUPP; - err = 
nl80211_parse_tx_bitrate_mask(info, &mask); + err = nl80211_parse_tx_bitrate_mask(info, info->attrs, + NL80211_ATTR_TX_RATES, &mask); if (err) return err; @@ -11308,7 +11398,9 @@ static int nl80211_join_mesh(struct sk_buff *skb, struct genl_info *info) } if (info->attrs[NL80211_ATTR_TX_RATES]) { - err = nl80211_parse_tx_bitrate_mask(info, &setup.beacon_rate); + err = nl80211_parse_tx_bitrate_mask(info, info->attrs, + NL80211_ATTR_TX_RATES, + &setup.beacon_rate); if (err) return err; @@ -12262,14 +12354,22 @@ static int nl80211_set_rekey_data(struct sk_buff *skb, struct genl_info *info) return -EINVAL; if (nla_len(tb[NL80211_REKEY_DATA_REPLAY_CTR]) != NL80211_REPLAY_CTR_LEN) return -ERANGE; - if (nla_len(tb[NL80211_REKEY_DATA_KEK]) != NL80211_KEK_LEN) + if (nla_len(tb[NL80211_REKEY_DATA_KEK]) != NL80211_KEK_LEN && + !(rdev->wiphy.flags & WIPHY_FLAG_SUPPORTS_EXT_KEK_KCK && + nla_len(tb[NL80211_REKEY_DATA_KEK]) == NL80211_KEK_EXT_LEN)) return -ERANGE; - if (nla_len(tb[NL80211_REKEY_DATA_KCK]) != NL80211_KCK_LEN) + if (nla_len(tb[NL80211_REKEY_DATA_KCK]) != NL80211_KCK_LEN && + !(rdev->wiphy.flags & WIPHY_FLAG_SUPPORTS_EXT_KEK_KCK && + nla_len(tb[NL80211_REKEY_DATA_KEK]) == NL80211_KCK_EXT_LEN)) return -ERANGE; rekey_data.kek = nla_data(tb[NL80211_REKEY_DATA_KEK]); rekey_data.kck = nla_data(tb[NL80211_REKEY_DATA_KCK]); rekey_data.replay_ctr = nla_data(tb[NL80211_REKEY_DATA_REPLAY_CTR]); + rekey_data.kek_len = nla_len(tb[NL80211_REKEY_DATA_KEK]); + rekey_data.kck_len = nla_len(tb[NL80211_REKEY_DATA_KCK]); + if (tb[NL80211_REKEY_DATA_AKM]) + rekey_data.akm = nla_get_u32(tb[NL80211_REKEY_DATA_AKM]); wdev_lock(wdev); if (!wdev->current_bss) { @@ -13815,6 +13915,7 @@ static int nl80211_external_auth(struct sk_buff *skb, struct genl_info *info) static int nl80211_tx_control_port(struct sk_buff *skb, struct genl_info *info) { + bool dont_wait_for_ack = info->attrs[NL80211_ATTR_DONT_WAIT_FOR_ACK]; struct cfg80211_registered_device *rdev = info->user_ptr[0]; struct net_device *dev = info->user_ptr[1]; struct wireless_dev *wdev = dev->ieee80211_ptr; @@ -13823,6 +13924,7 @@ static int nl80211_tx_control_port(struct sk_buff *skb, struct genl_info *info) u8 *dest; u16 proto; bool noencrypt; + u64 cookie = 0; int err; if (!wiphy_ext_feature_isset(&rdev->wiphy, @@ -13867,9 +13969,12 @@ static int nl80211_tx_control_port(struct sk_buff *skb, struct genl_info *info) noencrypt = nla_get_flag(info->attrs[NL80211_ATTR_CONTROL_PORT_NO_ENCRYPT]); - return rdev_tx_control_port(rdev, dev, buf, len, - dest, cpu_to_be16(proto), noencrypt); - + err = rdev_tx_control_port(rdev, dev, buf, len, + dest, cpu_to_be16(proto), noencrypt, + dont_wait_for_ack ? 
NULL : &cookie); + if (!err && !dont_wait_for_ack) + nl_set_extack_cookie_u64(info->extack, cookie); + return err; out: wdev_unlock(wdev); return err; @@ -14034,10 +14139,7 @@ static int parse_tid_conf(struct cfg80211_registered_device *rdev, if (rdev->ops->reset_tid_config) { err = rdev_reset_tid_config(rdev, dev, peer, tid_conf->tids); - /* If peer is there no other configuration will be - * allowed - */ - if (err || peer) + if (err) return err; } else { return -EINVAL; @@ -14080,6 +14182,29 @@ static int parse_tid_conf(struct cfg80211_registered_device *rdev, nla_get_u8(attrs[NL80211_TID_CONFIG_ATTR_RTSCTS_CTRL]); } + if (attrs[NL80211_TID_CONFIG_ATTR_AMSDU_CTRL]) { + tid_conf->mask |= BIT(NL80211_TID_CONFIG_ATTR_AMSDU_CTRL); + tid_conf->amsdu = + nla_get_u8(attrs[NL80211_TID_CONFIG_ATTR_AMSDU_CTRL]); + } + + if (attrs[NL80211_TID_CONFIG_ATTR_TX_RATE_TYPE]) { + u32 idx = NL80211_TID_CONFIG_ATTR_TX_RATE_TYPE, attr; + + tid_conf->txrate_type = nla_get_u8(attrs[idx]); + + if (tid_conf->txrate_type != NL80211_TX_RATE_AUTOMATIC) { + attr = NL80211_TID_CONFIG_ATTR_TX_RATE; + err = nl80211_parse_tx_bitrate_mask(info, attrs, attr, + &tid_conf->txrate_mask); + if (err) + return err; + + tid_conf->mask |= BIT(NL80211_TID_CONFIG_ATTR_TX_RATE); + } + tid_conf->mask |= BIT(NL80211_TID_CONFIG_ATTR_TX_RATE_TYPE); + } + if (peer) mask = rdev->wiphy.tid_config_support.peer; else @@ -15191,14 +15316,27 @@ static int nl80211_add_scan_req(struct sk_buff *msg, } nla_nest_end(msg, nest); - nest = nla_nest_start_noflag(msg, NL80211_ATTR_SCAN_FREQUENCIES); - if (!nest) - goto nla_put_failure; - for (i = 0; i < req->n_channels; i++) { - if (nla_put_u32(msg, i, req->channels[i]->center_freq)) + if (req->flags & NL80211_SCAN_FLAG_FREQ_KHZ) { + nest = nla_nest_start(msg, NL80211_ATTR_SCAN_FREQ_KHZ); + if (!nest) goto nla_put_failure; + for (i = 0; i < req->n_channels; i++) { + if (nla_put_u32(msg, i, + ieee80211_channel_to_khz(req->channels[i]))) + goto nla_put_failure; + } + nla_nest_end(msg, nest); + } else { + nest = nla_nest_start_noflag(msg, + NL80211_ATTR_SCAN_FREQUENCIES); + if (!nest) + goto nla_put_failure; + for (i = 0; i < req->n_channels; i++) { + if (nla_put_u32(msg, i, req->channels[i]->center_freq)) + goto nla_put_failure; + } + nla_nest_end(msg, nest); } - nla_nest_end(msg, nest); if (req->ie && nla_put(msg, NL80211_ATTR_IE, req->ie_len, req->ie)) @@ -16209,7 +16347,8 @@ int nl80211_send_mgmt(struct cfg80211_registered_device *rdev, netdev->ifindex)) || nla_put_u64_64bit(msg, NL80211_ATTR_WDEV, wdev_id(wdev), NL80211_ATTR_PAD) || - nla_put_u32(msg, NL80211_ATTR_WIPHY_FREQ, freq) || + nla_put_u32(msg, NL80211_ATTR_WIPHY_FREQ, KHZ_TO_MHZ(freq)) || + nla_put_u32(msg, NL80211_ATTR_WIPHY_FREQ_OFFSET, freq % 1000) || (sig_dbm && nla_put_u32(msg, NL80211_ATTR_RX_SIGNAL_DBM, sig_dbm)) || nla_put(msg, NL80211_ATTR_FRAME, len, buf) || @@ -16226,8 +16365,9 @@ int nl80211_send_mgmt(struct cfg80211_registered_device *rdev, return -ENOBUFS; } -void cfg80211_mgmt_tx_status(struct wireless_dev *wdev, u64 cookie, - const u8 *buf, size_t len, bool ack, gfp_t gfp) +static void nl80211_frame_tx_status(struct wireless_dev *wdev, u64 cookie, + const u8 *buf, size_t len, bool ack, + gfp_t gfp, enum nl80211_commands command) { struct wiphy *wiphy = wdev->wiphy; struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy); @@ -16235,13 +16375,16 @@ void cfg80211_mgmt_tx_status(struct wireless_dev *wdev, u64 cookie, struct sk_buff *msg; void *hdr; - trace_cfg80211_mgmt_tx_status(wdev, cookie, ack); + if (command == 
NL80211_CMD_FRAME_TX_STATUS) + trace_cfg80211_mgmt_tx_status(wdev, cookie, ack); + else + trace_cfg80211_control_port_tx_status(wdev, cookie, ack); msg = nlmsg_new(100 + len, gfp); if (!msg) return; - hdr = nl80211hdr_put(msg, 0, 0, 0, NL80211_CMD_FRAME_TX_STATUS); + hdr = nl80211hdr_put(msg, 0, 0, 0, command); if (!hdr) { nlmsg_free(msg); return; @@ -16264,9 +16407,25 @@ void cfg80211_mgmt_tx_status(struct wireless_dev *wdev, u64 cookie, NL80211_MCGRP_MLME, gfp); return; - nla_put_failure: +nla_put_failure: nlmsg_free(msg); } + +void cfg80211_control_port_tx_status(struct wireless_dev *wdev, u64 cookie, + const u8 *buf, size_t len, bool ack, + gfp_t gfp) +{ + nl80211_frame_tx_status(wdev, cookie, buf, len, ack, gfp, + NL80211_CMD_CONTROL_PORT_FRAME_TX_STATUS); +} +EXPORT_SYMBOL(cfg80211_control_port_tx_status); + +void cfg80211_mgmt_tx_status(struct wireless_dev *wdev, u64 cookie, + const u8 *buf, size_t len, bool ack, gfp_t gfp) +{ + nl80211_frame_tx_status(wdev, cookie, buf, len, ack, gfp, + NL80211_CMD_FRAME_TX_STATUS); +} EXPORT_SYMBOL(cfg80211_mgmt_tx_status); static int __nl80211_rx_control_port(struct net_device *dev, @@ -16835,9 +16994,8 @@ void cfg80211_probe_status(struct net_device *dev, const u8 *addr, } EXPORT_SYMBOL(cfg80211_probe_status); -void cfg80211_report_obss_beacon(struct wiphy *wiphy, - const u8 *frame, size_t len, - int freq, int sig_dbm) +void cfg80211_report_obss_beacon_khz(struct wiphy *wiphy, const u8 *frame, + size_t len, int freq, int sig_dbm) { struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy); struct sk_buff *msg; @@ -16860,7 +17018,10 @@ void cfg80211_report_obss_beacon(struct wiphy *wiphy, if (nla_put_u32(msg, NL80211_ATTR_WIPHY, rdev->wiphy_idx) || (freq && - nla_put_u32(msg, NL80211_ATTR_WIPHY_FREQ, freq)) || + (nla_put_u32(msg, NL80211_ATTR_WIPHY_FREQ, + KHZ_TO_MHZ(freq)) || + nla_put_u32(msg, NL80211_ATTR_WIPHY_FREQ_OFFSET, + freq % 1000))) || (sig_dbm && nla_put_u32(msg, NL80211_ATTR_RX_SIGNAL_DBM, sig_dbm)) || nla_put(msg, NL80211_ATTR_FRAME, len, frame)) @@ -16877,7 +17038,7 @@ void cfg80211_report_obss_beacon(struct wiphy *wiphy, spin_unlock_bh(&rdev->beacon_registrations_lock); nlmsg_free(msg); } -EXPORT_SYMBOL(cfg80211_report_obss_beacon); +EXPORT_SYMBOL(cfg80211_report_obss_beacon_khz); #ifdef CONFIG_PM static int cfg80211_net_detect_results(struct sk_buff *msg, diff --git a/net/wireless/rdev-ops.h b/net/wireless/rdev-ops.h index df5142e86c4f..950d57494168 100644 --- a/net/wireless/rdev-ops.h +++ b/net/wireless/rdev-ops.h @@ -748,14 +748,17 @@ static inline int rdev_tx_control_port(struct cfg80211_registered_device *rdev, struct net_device *dev, const void *buf, size_t len, const u8 *dest, __be16 proto, - const bool noencrypt) + const bool noencrypt, u64 *cookie) { int ret; trace_rdev_tx_control_port(&rdev->wiphy, dev, buf, len, dest, proto, noencrypt); ret = rdev->ops->tx_control_port(&rdev->wiphy, dev, buf, len, - dest, proto, noencrypt); - trace_rdev_return_int(&rdev->wiphy, ret); + dest, proto, noencrypt, cookie); + if (cookie) + trace_rdev_return_int_cookie(&rdev->wiphy, ret, *cookie); + else + trace_rdev_return_int(&rdev->wiphy, ret); return ret; } diff --git a/net/wireless/sme.c b/net/wireless/sme.c index 3554c0d951f4..15595cf401de 100644 --- a/net/wireless/sme.c +++ b/net/wireless/sme.c @@ -5,7 +5,7 @@ * (for nl80211's connect() and wext) * * Copyright 2009 Johannes Berg <johannes@sipsolutions.net> - * Copyright (C) 2009 Intel Corporation. All rights reserved. + * Copyright (C) 2009, 2020 Intel Corporation. 
All rights reserved. * Copyright 2017 Intel Deutschland GmbH */ @@ -1118,7 +1118,10 @@ void __cfg80211_disconnected(struct net_device *dev, const u8 *ie, if (wiphy_ext_feature_isset( wdev->wiphy, - NL80211_EXT_FEATURE_BEACON_PROTECTION)) + NL80211_EXT_FEATURE_BEACON_PROTECTION) || + wiphy_ext_feature_isset( + wdev->wiphy, + NL80211_EXT_FEATURE_BEACON_PROTECTION_CLIENT)) max_key_idx = 7; for (i = 0; i <= max_key_idx; i++) rdev_del_key(rdev, dev, i, false, NULL); diff --git a/net/wireless/trace.h b/net/wireless/trace.h index 53c887ea67c7..b23cab016521 100644 --- a/net/wireless/trace.h +++ b/net/wireless/trace.h @@ -2840,8 +2840,8 @@ TRACE_EVENT(cfg80211_rx_mgmt, __entry->freq = freq; __entry->sig_dbm = sig_dbm; ), - TP_printk(WDEV_PR_FMT ", freq: %d, sig dbm: %d", - WDEV_PR_ARG, __entry->freq, __entry->sig_dbm) + TP_printk(WDEV_PR_FMT ", freq: "KHZ_F", sig dbm: %d", + WDEV_PR_ARG, PR_KHZ(__entry->freq), __entry->sig_dbm) ); TRACE_EVENT(cfg80211_mgmt_tx_status, @@ -2861,6 +2861,23 @@ TRACE_EVENT(cfg80211_mgmt_tx_status, WDEV_PR_ARG, __entry->cookie, BOOL_TO_STR(__entry->ack)) ); +TRACE_EVENT(cfg80211_control_port_tx_status, + TP_PROTO(struct wireless_dev *wdev, u64 cookie, bool ack), + TP_ARGS(wdev, cookie, ack), + TP_STRUCT__entry( + WDEV_ENTRY + __field(u64, cookie) + __field(bool, ack) + ), + TP_fast_assign( + WDEV_ASSIGN; + __entry->cookie = cookie; + __entry->ack = ack; + ), + TP_printk(WDEV_PR_FMT", cookie: %llu, ack: %s", + WDEV_PR_ARG, __entry->cookie, BOOL_TO_STR(__entry->ack)) +); + TRACE_EVENT(cfg80211_rx_control_port, TP_PROTO(struct net_device *netdev, struct sk_buff *skb, bool unencrypted), @@ -3121,8 +3138,8 @@ TRACE_EVENT(cfg80211_report_obss_beacon, __entry->freq = freq; __entry->sig_dbm = sig_dbm; ), - TP_printk(WIPHY_PR_FMT ", freq: %d, sig_dbm: %d", - WIPHY_PR_ARG, __entry->freq, __entry->sig_dbm) + TP_printk(WIPHY_PR_FMT ", freq: "KHZ_F", sig_dbm: %d", + WIPHY_PR_ARG, PR_KHZ(__entry->freq), __entry->sig_dbm) ); TRACE_EVENT(cfg80211_tdls_oper_request, diff --git a/net/wireless/util.c b/net/wireless/util.c index df75e58eca5d..4d3b76f94f55 100644 --- a/net/wireless/util.c +++ b/net/wireless/util.c @@ -92,9 +92,11 @@ u32 ieee80211_channel_to_freq_khz(int chan, enum nl80211_band band) return MHZ_TO_KHZ(5000 + chan * 5); break; case NL80211_BAND_6GHZ: - /* see 802.11ax D4.1 27.3.22.2 */ + /* see 802.11ax D6.1 27.3.23.2 */ + if (chan == 2) + return MHZ_TO_KHZ(5935); if (chan <= 253) - return 5940 + chan * 5; + return MHZ_TO_KHZ(5950 + chan * 5); break; case NL80211_BAND_60GHZ: if (chan < 7) @@ -240,7 +242,9 @@ int cfg80211_validate_key_settings(struct cfg80211_registered_device *rdev, int max_key_idx = 5; if (wiphy_ext_feature_isset(&rdev->wiphy, - NL80211_EXT_FEATURE_BEACON_PROTECTION)) + NL80211_EXT_FEATURE_BEACON_PROTECTION) || + wiphy_ext_feature_isset(&rdev->wiphy, + NL80211_EXT_FEATURE_BEACON_PROTECTION_CLIENT)) max_key_idx = 7; if (key_idx < 0 || key_idx > max_key_idx) return -EINVAL; diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c index 19e59d1a5e9f..1bbaf1747e4f 100644 --- a/net/xdp/xdp_umem.c +++ b/net/xdp/xdp_umem.c @@ -305,8 +305,8 @@ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr) { bool unaligned_chunks = mr->flags & XDP_UMEM_UNALIGNED_CHUNK_FLAG; u32 chunk_size = mr->chunk_size, headroom = mr->headroom; + u64 npgs, addr = mr->addr, size = mr->len; unsigned int chunks, chunks_per_page; - u64 addr = mr->addr, size = mr->len; int err; if (chunk_size < XDP_UMEM_MIN_CHUNK_SIZE || chunk_size > PAGE_SIZE) { @@ -336,6 +336,10 @@ static int 
xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr) if ((addr + size) < addr) return -EINVAL; + npgs = div_u64(size, PAGE_SIZE); + if (npgs > U32_MAX) + return -EINVAL; + chunks = (unsigned int)div_u64(size, chunk_size); if (chunks == 0) return -EINVAL; @@ -352,7 +356,7 @@ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr) umem->size = size; umem->headroom = headroom; umem->chunk_size = chunk_size; - umem->npgs = size / PAGE_SIZE; + umem->npgs = (u32)npgs; umem->pgs = NULL; umem->user = NULL; umem->flags = mr->flags; diff --git a/net/xfrm/espintcp.c b/net/xfrm/espintcp.c index 2132a3b6df0f..100e29682b48 100644 --- a/net/xfrm/espintcp.c +++ b/net/xfrm/espintcp.c @@ -390,6 +390,7 @@ static void espintcp_destruct(struct sock *sk) { struct espintcp_ctx *ctx = espintcp_getctx(sk); + ctx->saved_destruct(sk); kfree(ctx); } @@ -445,6 +446,7 @@ static int espintcp_init_sk(struct sock *sk) } ctx->saved_data_ready = sk->sk_data_ready; ctx->saved_write_space = sk->sk_write_space; + ctx->saved_destruct = sk->sk_destruct; sk->sk_data_ready = espintcp_data_ready; sk->sk_write_space = espintcp_write_space; sk->sk_destruct = espintcp_destruct; diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c index 6cc7f7f1dd68..f50d1f97cf8e 100644 --- a/net/xfrm/xfrm_device.c +++ b/net/xfrm/xfrm_device.c @@ -25,12 +25,10 @@ static void __xfrm_transport_prep(struct xfrm_state *x, struct sk_buff *skb, struct xfrm_offload *xo = xfrm_offload(skb); skb_reset_mac_len(skb); - pskb_pull(skb, skb->mac_len + hsize + x->props.header_len); - - if (xo->flags & XFRM_GSO_SEGMENT) { - skb_reset_transport_header(skb); + if (xo->flags & XFRM_GSO_SEGMENT) skb->transport_header -= x->props.header_len; - } + + pskb_pull(skb, skb_transport_offset(skb) + x->props.header_len); } static void __xfrm_mode_tunnel_prep(struct xfrm_state *x, struct sk_buff *skb, diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c index 6db266a0cb2d..bd984ff17c2d 100644 --- a/net/xfrm/xfrm_input.c +++ b/net/xfrm/xfrm_input.c @@ -645,7 +645,7 @@ resume: dev_put(skb->dev); spin_lock(&x->lock); - if (nexthdr <= 0) { + if (nexthdr < 0) { if (nexthdr == -EBADMSG) { xfrm_audit_state_icvfail(x, skb, x->type->proto); diff --git a/net/xfrm/xfrm_interface.c b/net/xfrm/xfrm_interface.c index 02f8f46d0cc5..c407ecbc5d46 100644 --- a/net/xfrm/xfrm_interface.c +++ b/net/xfrm/xfrm_interface.c @@ -748,7 +748,28 @@ static struct rtnl_link_ops xfrmi_link_ops __read_mostly = { .get_link_net = xfrmi_get_link_net, }; +static void __net_exit xfrmi_exit_batch_net(struct list_head *net_exit_list) +{ + struct net *net; + LIST_HEAD(list); + + rtnl_lock(); + list_for_each_entry(net, net_exit_list, exit_list) { + struct xfrmi_net *xfrmn = net_generic(net, xfrmi_net_id); + struct xfrm_if __rcu **xip; + struct xfrm_if *xi; + + for (xip = &xfrmn->xfrmi[0]; + (xi = rtnl_dereference(*xip)) != NULL; + xip = &xi->next) + unregister_netdevice_queue(xi->dev, &list); + } + unregister_netdevice_many(&list); + rtnl_unlock(); +} + static struct pernet_operations xfrmi_net_ops = { + .exit_batch = xfrmi_exit_batch_net, .id = &xfrmi_net_id, .size = sizeof(struct xfrmi_net), }; diff --git a/net/xfrm/xfrm_output.c b/net/xfrm/xfrm_output.c index 9c43b8dd80fb..e4c23f69f69f 100644 --- a/net/xfrm/xfrm_output.c +++ b/net/xfrm/xfrm_output.c @@ -605,18 +605,20 @@ int xfrm_output(struct sock *sk, struct sk_buff *skb) xfrm_state_hold(x); if (skb_is_gso(skb)) { - skb_shinfo(skb)->gso_type |= SKB_GSO_ESP; + if (skb->inner_protocol) + return xfrm_output_gso(net, sk, skb); - 
return xfrm_output2(net, sk, skb); + skb_shinfo(skb)->gso_type |= SKB_GSO_ESP; + goto out; } if (x->xso.dev && x->xso.dev->features & NETIF_F_HW_ESP_TX_CSUM) goto out; + } else { + if (skb_is_gso(skb)) + return xfrm_output_gso(net, sk, skb); } - if (skb_is_gso(skb)) - return xfrm_output_gso(net, sk, skb); - if (skb->ip_summed == CHECKSUM_PARTIAL) { err = skb_checksum_help(skb); if (err) { @@ -753,7 +755,8 @@ void xfrm_local_error(struct sk_buff *skb, int mtu) if (skb->protocol == htons(ETH_P_IP)) proto = AF_INET; - else if (skb->protocol == htons(ETH_P_IPV6)) + else if (skb->protocol == htons(ETH_P_IPV6) && + skb->sk->sk_family == AF_INET6) proto = AF_INET6; else return; diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c index 297b2fdb3c29..564aa6492e7c 100644 --- a/net/xfrm/xfrm_policy.c +++ b/net/xfrm/xfrm_policy.c @@ -1436,12 +1436,7 @@ static void xfrm_policy_requeue(struct xfrm_policy *old, static bool xfrm_policy_mark_match(struct xfrm_policy *policy, struct xfrm_policy *pol) { - u32 mark = policy->mark.v & policy->mark.m; - - if (policy->mark.v == pol->mark.v && policy->mark.m == pol->mark.m) - return true; - - if ((mark & pol->mark.m) == pol->mark.v && + if (policy->mark.v == pol->mark.v && policy->priority == pol->priority) return true; diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl index 2be07ed4d70c..bf9e0e87a6ef 100755 --- a/scripts/checkpatch.pl +++ b/scripts/checkpatch.pl @@ -51,7 +51,7 @@ my %ignore_type = (); my @ignore = (); my $help = 0; my $configuration_file = ".checkpatch.conf"; -my $max_line_length = 80; +my $max_line_length = 100; my $ignore_perl_version = 0; my $minimum_perl_version = 5.10.0; my $min_conf_desc_length = 4; @@ -97,9 +97,11 @@ Options: --types TYPE(,TYPE2...) show only these comma separated message types --ignore TYPE(,TYPE2...) ignore various comma separated message types --show-types show the specific message type in the output - --max-line-length=n set the maximum line length, if exceeded, warn + --max-line-length=n set the maximum line length, (default $max_line_length) + if exceeded, warn on patches + requires --strict for use with --file --min-conf-desc-length=n set the min description length, if shorter, warn - --tab-size=n set the number of spaces for tab (default 8) + --tab-size=n set the number of spaces for tab (default $tabsize) --root=PATH PATH to the kernel tree root --no-summary suppress the per-file summary --mailback only produce a report in case of warnings/errors @@ -3240,8 +3242,10 @@ sub process { if ($msg_type ne "" && (show_type("LONG_LINE") || show_type($msg_type))) { - WARN($msg_type, - "line over $max_line_length characters\n" . $herecurr); + my $msg_level = \&WARN; + $msg_level = \&CHK if ($file); + &{$msg_level}($msg_type, + "line length of $length exceeds $max_line_length columns\n" . 
$herecurr); } } diff --git a/security/Makefile b/security/Makefile index 22e73a3482bd..3baf435de541 100644 --- a/security/Makefile +++ b/security/Makefile @@ -30,7 +30,7 @@ obj-$(CONFIG_SECURITY_YAMA) += yama/ obj-$(CONFIG_SECURITY_LOADPIN) += loadpin/ obj-$(CONFIG_SECURITY_SAFESETID) += safesetid/ obj-$(CONFIG_SECURITY_LOCKDOWN_LSM) += lockdown/ -obj-$(CONFIG_CGROUP_DEVICE) += device_cgroup.o +obj-$(CONFIG_CGROUPS) += device_cgroup.o obj-$(CONFIG_BPF_LSM) += bpf/ # Object integrity file lists diff --git a/security/commoncap.c b/security/commoncap.c index f4ee0ae106b2..0ca31c8bc0b1 100644 --- a/security/commoncap.c +++ b/security/commoncap.c @@ -812,6 +812,7 @@ int cap_bprm_set_creds(struct linux_binprm *bprm) int ret; kuid_t root_uid; + new->cap_ambient = old->cap_ambient; if (WARN_ON(!cap_ambient_invariant_ok(old))) return -EPERM; diff --git a/security/device_cgroup.c b/security/device_cgroup.c index 7d0f8f7431ff..43ab0ad45c1b 100644 --- a/security/device_cgroup.c +++ b/security/device_cgroup.c @@ -15,6 +15,8 @@ #include <linux/rcupdate.h> #include <linux/mutex.h> +#ifdef CONFIG_CGROUP_DEVICE + static DEFINE_MUTEX(devcgroup_mutex); enum devcg_behavior { @@ -792,7 +794,7 @@ struct cgroup_subsys devices_cgrp_subsys = { }; /** - * __devcgroup_check_permission - checks if an inode operation is permitted + * devcgroup_legacy_check_permission - checks if an inode operation is permitted * @dev_cgroup: the dev cgroup to be tested against * @type: device type * @major: device major number @@ -801,7 +803,7 @@ struct cgroup_subsys devices_cgrp_subsys = { * * returns 0 on success, -EPERM case the operation is not permitted */ -static int __devcgroup_check_permission(short type, u32 major, u32 minor, +static int devcgroup_legacy_check_permission(short type, u32 major, u32 minor, short access) { struct dev_cgroup *dev_cgroup; @@ -825,6 +827,10 @@ static int __devcgroup_check_permission(short type, u32 major, u32 minor, return 0; } +#endif /* CONFIG_CGROUP_DEVICE */ + +#if defined(CONFIG_CGROUP_DEVICE) || defined(CONFIG_CGROUP_BPF) + int devcgroup_check_permission(short type, u32 major, u32 minor, short access) { int rc = BPF_CGROUP_RUN_PROG_DEVICE_CGROUP(type, major, minor, access); @@ -832,6 +838,13 @@ int devcgroup_check_permission(short type, u32 major, u32 minor, short access) if (rc) return -EPERM; - return __devcgroup_check_permission(type, major, minor, access); + #ifdef CONFIG_CGROUP_DEVICE + return devcgroup_legacy_check_permission(type, major, minor, access); + + #else /* CONFIG_CGROUP_DEVICE */ + return 0; + + #endif /* CONFIG_CGROUP_DEVICE */ } EXPORT_SYMBOL(devcgroup_check_permission); +#endif /* defined(CONFIG_CGROUP_DEVICE) || defined(CONFIG_CGROUP_BPF) */ diff --git a/sound/core/hwdep.c b/sound/core/hwdep.c index b412d3b3d5ff..21edb8ac95eb 100644 --- a/sound/core/hwdep.c +++ b/sound/core/hwdep.c @@ -216,12 +216,12 @@ static int snd_hwdep_dsp_load(struct snd_hwdep *hw, if (info.index >= 32) return -EINVAL; /* check whether the dsp was already loaded */ - if (hw->dsp_loaded & (1 << info.index)) + if (hw->dsp_loaded & (1u << info.index)) return -EBUSY; err = hw->ops.dsp_load(hw, &info); if (err < 0) return err; - hw->dsp_loaded |= (1 << info.index); + hw->dsp_loaded |= (1u << info.index); return 0; } diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index 041d2a32059b..e62d58872b6e 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -384,6 +384,7 @@ static void alc_fill_eapd_coef(struct hda_codec *codec) case 0x10ec0282: case 
0x10ec0283: case 0x10ec0286: + case 0x10ec0287: case 0x10ec0288: case 0x10ec0285: case 0x10ec0298: @@ -5484,18 +5485,9 @@ static void alc_fixup_tpt470_dock(struct hda_codec *codec, { 0x19, 0x21a11010 }, /* dock mic */ { } }; - /* Assure the speaker pin to be coupled with DAC NID 0x03; otherwise - * the speaker output becomes too low by some reason on Thinkpads with - * ALC298 codec - */ - static const hda_nid_t preferred_pairs[] = { - 0x14, 0x03, 0x17, 0x02, 0x21, 0x02, - 0 - }; struct alc_spec *spec = codec->spec; if (action == HDA_FIXUP_ACT_PRE_PROBE) { - spec->gen.preferred_dacs = preferred_pairs; spec->parse_flags = HDA_PINCFG_NO_HP_FIXUP; snd_hda_apply_pincfgs(codec, pincfgs); } else if (action == HDA_FIXUP_ACT_INIT) { @@ -5508,6 +5500,23 @@ static void alc_fixup_tpt470_dock(struct hda_codec *codec, } } +static void alc_fixup_tpt470_dacs(struct hda_codec *codec, + const struct hda_fixup *fix, int action) +{ + /* Assure the speaker pin to be coupled with DAC NID 0x03; otherwise + * the speaker output becomes too low by some reason on Thinkpads with + * ALC298 codec + */ + static const hda_nid_t preferred_pairs[] = { + 0x14, 0x03, 0x17, 0x02, 0x21, 0x02, + 0 + }; + struct alc_spec *spec = codec->spec; + + if (action == HDA_FIXUP_ACT_PRE_PROBE) + spec->gen.preferred_dacs = preferred_pairs; +} + static void alc_shutup_dell_xps13(struct hda_codec *codec) { struct alc_spec *spec = codec->spec; @@ -6063,6 +6072,7 @@ enum { ALC700_FIXUP_INTEL_REFERENCE, ALC274_FIXUP_DELL_BIND_DACS, ALC274_FIXUP_DELL_AIO_LINEOUT_VERB, + ALC298_FIXUP_TPT470_DOCK_FIX, ALC298_FIXUP_TPT470_DOCK, ALC255_FIXUP_DUMMY_LINEOUT_VERB, ALC255_FIXUP_DELL_HEADSET_MIC, @@ -6994,12 +7004,18 @@ static const struct hda_fixup alc269_fixups[] = { .chained = true, .chain_id = ALC274_FIXUP_DELL_BIND_DACS }, - [ALC298_FIXUP_TPT470_DOCK] = { + [ALC298_FIXUP_TPT470_DOCK_FIX] = { .type = HDA_FIXUP_FUNC, .v.func = alc_fixup_tpt470_dock, .chained = true, .chain_id = ALC293_FIXUP_LENOVO_SPK_NOISE }, + [ALC298_FIXUP_TPT470_DOCK] = { + .type = HDA_FIXUP_FUNC, + .v.func = alc_fixup_tpt470_dacs, + .chained = true, + .chain_id = ALC298_FIXUP_TPT470_DOCK_FIX + }, [ALC255_FIXUP_DUMMY_LINEOUT_VERB] = { .type = HDA_FIXUP_PINS, .v.pins = (const struct hda_pintbl[]) { @@ -7638,6 +7654,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = { {.id = ALC292_FIXUP_TPT440_DOCK, .name = "tpt440-dock"}, {.id = ALC292_FIXUP_TPT440, .name = "tpt440"}, {.id = ALC292_FIXUP_TPT460, .name = "tpt460"}, + {.id = ALC298_FIXUP_TPT470_DOCK_FIX, .name = "tpt470-dock-fix"}, {.id = ALC298_FIXUP_TPT470_DOCK, .name = "tpt470-dock"}, {.id = ALC233_FIXUP_LENOVO_MULTI_CODECS, .name = "dual-codecs"}, {.id = ALC700_FIXUP_INTEL_REFERENCE, .name = "alc700-ref"}, @@ -8276,6 +8293,7 @@ static int patch_alc269(struct hda_codec *codec) case 0x10ec0215: case 0x10ec0245: case 0x10ec0285: + case 0x10ec0287: case 0x10ec0289: spec->codec_variant = ALC269_TYPE_ALC215; spec->shutup = alc225_shutup; @@ -9554,6 +9572,7 @@ static const struct hda_device_id snd_hda_id_realtek[] = { HDA_CODEC_ENTRY(0x10ec0284, "ALC284", patch_alc269), HDA_CODEC_ENTRY(0x10ec0285, "ALC285", patch_alc269), HDA_CODEC_ENTRY(0x10ec0286, "ALC286", patch_alc269), + HDA_CODEC_ENTRY(0x10ec0287, "ALC287", patch_alc269), HDA_CODEC_ENTRY(0x10ec0288, "ALC288", patch_alc269), HDA_CODEC_ENTRY(0x10ec0289, "ALC289", patch_alc269), HDA_CODEC_ENTRY(0x10ec0290, "ALC290", patch_alc269), diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c index a88d7854513b..15769f266790 100644 --- a/sound/usb/mixer.c +++ b/sound/usb/mixer.c 
@@ -1182,6 +1182,14 @@ static void volume_control_quirks(struct usb_mixer_elem_info *cval, cval->res = 384; } break; + case USB_ID(0x0495, 0x3042): /* ESS Technology Asus USB DAC */ + if ((strstr(kctl->id.name, "Playback Volume") != NULL) || + strstr(kctl->id.name, "Capture Volume") != NULL) { + cval->min >>= 8; + cval->max = 0; + cval->res = 1; + } + break; } } diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c index bfdc6ad52785..9af7aa93f6fa 100644 --- a/sound/usb/mixer_maps.c +++ b/sound/usb/mixer_maps.c @@ -397,6 +397,21 @@ static const struct usbmix_connector_map trx40_mobo_connector_map[] = { {} }; +/* Rear panel + front mic on Gigabyte TRX40 Aorus Master with ALC1220-VB */ +static const struct usbmix_name_map aorus_master_alc1220vb_map[] = { + { 17, NULL }, /* OT, IEC958?, disabled */ + { 19, NULL, 12 }, /* FU, Input Gain Pad - broken response, disabled */ + { 16, "Line Out" }, /* OT */ + { 22, "Line Out Playback" }, /* FU */ + { 7, "Line" }, /* IT */ + { 19, "Line Capture" }, /* FU */ + { 8, "Mic" }, /* IT */ + { 20, "Mic Capture" }, /* FU */ + { 9, "Front Mic" }, /* IT */ + { 21, "Front Mic Capture" }, /* FU */ + {} +}; + /* * Control map entries */ @@ -526,6 +541,10 @@ static const struct usbmix_ctl_map usbmix_ctl_maps[] = { .id = USB_ID(0x1b1c, 0x0a42), .map = corsair_virtuoso_map, }, + { /* Gigabyte TRX40 Aorus Master (rear panel + front mic) */ + .id = USB_ID(0x0414, 0xa001), + .map = aorus_master_alc1220vb_map, + }, { /* Gigabyte TRX40 Aorus Pro WiFi */ .id = USB_ID(0x0414, 0xa002), .map = trx40_mobo_map, diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h index 6313c30f5c85..eb89902a83be 100644 --- a/sound/usb/quirks-table.h +++ b/sound/usb/quirks-table.h @@ -3566,4 +3566,29 @@ ALC1220_VB_DESKTOP(0x0db0, 0x543d), /* MSI TRX40 */ ALC1220_VB_DESKTOP(0x26ce, 0x0a01), /* Asrock TRX40 Creator */ #undef ALC1220_VB_DESKTOP +/* Two entries for Gigabyte TRX40 Aorus Master: + * TRX40 Aorus Master has two USB-audio devices, one for the front headphone + * with ESS SABRE9218 DAC chip, while another for the rest I/O (the rear + * panel and the front mic) with Realtek ALC1220-VB. + * Here we provide two distinct names for making UCM profiles easier. 
+ */ +{ + USB_DEVICE(0x0414, 0xa000), + .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) { + .vendor_name = "Gigabyte", + .product_name = "Aorus Master Front Headphone", + .profile_name = "Gigabyte-Aorus-Master-Front-Headphone", + .ifnum = QUIRK_NO_INTERFACE + } +}, +{ + USB_DEVICE(0x0414, 0xa001), + .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) { + .vendor_name = "Gigabyte", + .product_name = "Aorus Master Main Audio", + .profile_name = "Gigabyte-Aorus-Master-Main-Audio", + .ifnum = QUIRK_NO_INTERFACE + } +}, + #undef USB_DEVICE_VENDOR_SPEC diff --git a/tools/arch/x86/include/uapi/asm/unistd.h b/tools/arch/x86/include/uapi/asm/unistd.h index 196fdd02b8b1..30d7d04d72d6 100644 --- a/tools/arch/x86/include/uapi/asm/unistd.h +++ b/tools/arch/x86/include/uapi/asm/unistd.h @@ -3,7 +3,7 @@ #define _UAPI_ASM_X86_UNISTD_H /* x32 syscall flag bit */ -#define __X32_SYSCALL_BIT 0x40000000UL +#define __X32_SYSCALL_BIT 0x40000000 #ifndef __KERNEL__ # ifdef __i386__ diff --git a/tools/testing/selftests/bpf/verifier/bounds.c b/tools/testing/selftests/bpf/verifier/bounds.c index a253a064e6e0..58f4aa593b1b 100644 --- a/tools/testing/selftests/bpf/verifier/bounds.c +++ b/tools/testing/selftests/bpf/verifier/bounds.c @@ -238,7 +238,7 @@ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), BPF_LD_MAP_FD(BPF_REG_1, 0), BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9), + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8), /* r1 = [0x00, 0xff] */ BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0xffffff80 >> 1), @@ -253,10 +253,6 @@ * [0xffff'ffff'0000'0080, 0xffff'ffff'ffff'ffff] */ BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 0xffffff80 >> 1), - /* r1 = 0 or - * [0x00ff'ffff'ff00'0000, 0x00ff'ffff'ffff'ffff] - */ - BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 8), /* error on OOB pointer computation */ BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), /* exit */ @@ -265,8 +261,10 @@ }, .fixup_map_hash_8b = { 3 }, /* not actually fully unbounded, but the bound is very high */ - .errstr = "value 72057594021150720 makes map_value pointer be out of bounds", - .result = REJECT + .errstr_unpriv = "R1 has unknown scalar with mixed signed bounds, pointer arithmetic with it prohibited for !root", + .result_unpriv = REJECT, + .errstr = "value -4294967168 makes map_value pointer be out of bounds", + .result = REJECT, }, { "bounds check after truncation of boundary-crossing range (2)", @@ -276,7 +274,7 @@ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), BPF_LD_MAP_FD(BPF_REG_1, 0), BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9), + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8), /* r1 = [0x00, 0xff] */ BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0xffffff80 >> 1), @@ -293,10 +291,6 @@ * [0xffff'ffff'0000'0080, 0xffff'ffff'ffff'ffff] */ BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 0xffffff80 >> 1), - /* r1 = 0 or - * [0x00ff'ffff'ff00'0000, 0x00ff'ffff'ffff'ffff] - */ - BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 8), /* error on OOB pointer computation */ BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), /* exit */ @@ -305,8 +299,10 @@ }, .fixup_map_hash_8b = { 3 }, /* not actually fully unbounded, but the bound is very high */ - .errstr = "value 72057594021150720 makes map_value pointer be out of bounds", - .result = REJECT + .errstr_unpriv = "R1 has unknown scalar with mixed signed bounds, pointer arithmetic with it prohibited for !root", + .result_unpriv = REJECT, + .errstr = "value -4294967168 
makes map_value pointer be out of bounds", + .result = REJECT, }, { "bounds check after wrapping 32-bit addition", @@ -539,3 +535,25 @@ }, .result = ACCEPT }, +{ + "assigning 32bit bounds to 64bit for wA = 0, wB = wA", + .insns = { + BPF_LDX_MEM(BPF_W, BPF_REG_8, BPF_REG_1, + offsetof(struct __sk_buff, data_end)), + BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1, + offsetof(struct __sk_buff, data)), + BPF_MOV32_IMM(BPF_REG_9, 0), + BPF_MOV32_REG(BPF_REG_2, BPF_REG_9), + BPF_MOV64_REG(BPF_REG_6, BPF_REG_7), + BPF_ALU64_REG(BPF_ADD, BPF_REG_6, BPF_REG_2), + BPF_MOV64_REG(BPF_REG_3, BPF_REG_6), + BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 8), + BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_8, 1), + BPF_LDX_MEM(BPF_W, BPF_REG_5, BPF_REG_6, 0), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .prog_type = BPF_PROG_TYPE_SCHED_CLS, + .result = ACCEPT, + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, +}, diff --git a/tools/testing/selftests/drivers/net/mlxsw/devlink_trap_control.sh b/tools/testing/selftests/drivers/net/mlxsw/devlink_trap_control.sh new file mode 100755 index 000000000000..a37273473c1b --- /dev/null +++ b/tools/testing/selftests/drivers/net/mlxsw/devlink_trap_control.sh @@ -0,0 +1,688 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 +# +# Test devlink-trap control trap functionality over mlxsw. Each registered +# control packet trap is tested to make sure it is triggered under the right +# conditions. +# +# +---------------------------------+ +# | H1 (vrf) | +# | + $h1 | +# | | 192.0.2.1/24 | +# | | 2001:db8:1::1/64 | +# | | | +# | | default via 192.0.2.2 | +# | | default via 2001:db8:1::2 | +# +----|----------------------------+ +# | +# +----|----------------------------------------------------------------------+ +# | SW | | +# | + $rp1 | +# | 192.0.2.2/24 | +# | 2001:db8:1::2/64 | +# | | +# | 2001:db8:2::2/64 | +# | 198.51.100.2/24 | +# | + $rp2 | +# | | | +# +----|----------------------------------------------------------------------+ +# | +# +----|----------------------------+ +# | | default via 198.51.100.2 | +# | | default via 2001:db8:2::2 | +# | | | +# | | 2001:db8:2::1/64 | +# | | 198.51.100.1/24 | +# | + $h2 | +# | H2 (vrf) | +# +---------------------------------+ + +lib_dir=$(dirname $0)/../../../net/forwarding + +ALL_TESTS=" + stp_test + lacp_test + lldp_test + igmp_query_test + igmp_v1_report_test + igmp_v2_report_test + igmp_v3_report_test + igmp_v2_leave_test + mld_query_test + mld_v1_report_test + mld_v2_report_test + mld_v1_done_test + ipv4_dhcp_test + ipv6_dhcp_test + arp_request_test + arp_response_test + ipv6_neigh_solicit_test + ipv6_neigh_advert_test + ipv4_bfd_test + ipv6_bfd_test + ipv4_ospf_test + ipv6_ospf_test + ipv4_bgp_test + ipv6_bgp_test + ipv4_vrrp_test + ipv6_vrrp_test + ipv4_pim_test + ipv6_pim_test + uc_loopback_test + local_route_test + external_route_test + ipv6_uc_dip_link_local_scope_test + ipv4_router_alert_test + ipv6_router_alert_test + ipv6_dip_all_nodes_test + ipv6_dip_all_routers_test + ipv6_router_solicit_test + ipv6_router_advert_test + ipv6_redirect_test + ptp_event_test + ptp_general_test + flow_action_sample_test + flow_action_trap_test +" +NUM_NETIFS=4 +source $lib_dir/lib.sh +source $lib_dir/devlink_lib.sh + +h1_create() +{ + simple_if_init $h1 192.0.2.1/24 2001:db8:1::1/64 + + ip -4 route add default vrf v$h1 nexthop via 192.0.2.2 + ip -6 route add default vrf v$h1 nexthop via 2001:db8:1::2 +} + +h1_destroy() +{ + ip -6 route del default vrf v$h1 nexthop via 2001:db8:1::2 + ip -4 route del default vrf v$h1 nexthop via 192.0.2.2 + + 
simple_if_fini $h1 192.0.2.1/24 2001:db8:1::1/64 +} + +h2_create() +{ + simple_if_init $h2 198.51.100.1/24 2001:db8:2::1/64 + + ip -4 route add default vrf v$h2 nexthop via 198.51.100.2 + ip -6 route add default vrf v$h2 nexthop via 2001:db8:2::2 +} + +h2_destroy() +{ + ip -6 route del default vrf v$h2 nexthop via 2001:db8:2::2 + ip -4 route del default vrf v$h2 nexthop via 198.51.100.2 + + simple_if_fini $h2 198.51.100.1/24 2001:db8:2::1/64 +} + +router_create() +{ + ip link set dev $rp1 up + ip link set dev $rp2 up + + __addr_add_del $rp1 add 192.0.2.2/24 2001:db8:1::2/64 + __addr_add_del $rp2 add 198.51.100.2/24 2001:db8:2::2/64 +} + +router_destroy() +{ + __addr_add_del $rp2 del 198.51.100.2/24 2001:db8:2::2/64 + __addr_add_del $rp1 del 192.0.2.2/24 2001:db8:1::2/64 + + ip link set dev $rp2 down + ip link set dev $rp1 down +} + +setup_prepare() +{ + h1=${NETIFS[p1]} + rp1=${NETIFS[p2]} + + rp2=${NETIFS[p3]} + h2=${NETIFS[p4]} + + vrf_prepare + forwarding_enable + + h1_create + h2_create + router_create +} + +cleanup() +{ + pre_cleanup + + router_destroy + h2_destroy + h1_destroy + + forwarding_restore + vrf_cleanup +} + +stp_test() +{ + devlink_trap_stats_test "STP" "stp" $MZ $h1 -c 1 -t bpdu -q +} + +lacp_payload_get() +{ + local source_mac=$1; shift + local p + + p=$(: + )"01:80:C2:00:00:02:"$( : ETH daddr + )"$source_mac:"$( : ETH saddr + )"88:09:"$( : ETH type + ) + echo $p +} + +lacp_test() +{ + local h1mac=$(mac_get $h1) + + devlink_trap_stats_test "LACP" "lacp" $MZ $h1 -c 1 \ + $(lacp_payload_get $h1mac) -p 100 -q +} + +lldp_payload_get() +{ + local source_mac=$1; shift + local p + + p=$(: + )"01:80:C2:00:00:0E:"$( : ETH daddr + )"$source_mac:"$( : ETH saddr + )"88:CC:"$( : ETH type + ) + echo $p +} + +lldp_test() +{ + local h1mac=$(mac_get $h1) + + devlink_trap_stats_test "LLDP" "lldp" $MZ $h1 -c 1 \ + $(lldp_payload_get $h1mac) -p 100 -q +} + +igmp_query_test() +{ + # IGMP (IP Protocol 2) Membership Query (Type 0x11) + devlink_trap_stats_test "IGMP Membership Query" "igmp_query" \ + $MZ $h1 -c 1 -a own -b 01:00:5E:00:00:01 \ + -A 192.0.2.1 -B 224.0.0.1 -t ip proto=2,p=11 -p 100 -q +} + +igmp_v1_report_test() +{ + # IGMP (IP Protocol 2) Version 1 Membership Report (Type 0x12) + devlink_trap_stats_test "IGMP Version 1 Membership Report" \ + "igmp_v1_report" $MZ $h1 -c 1 -a own -b 01:00:5E:00:00:01 \ + -A 192.0.2.1 -B 244.0.0.1 -t ip proto=2,p=12 -p 100 -q +} + +igmp_v2_report_test() +{ + # IGMP (IP Protocol 2) Version 2 Membership Report (Type 0x16) + devlink_trap_stats_test "IGMP Version 2 Membership Report" \ + "igmp_v2_report" $MZ $h1 -c 1 -a own -b 01:00:5E:00:00:01 \ + -A 192.0.2.1 -B 244.0.0.1 -t ip proto=2,p=16 -p 100 -q +} + +igmp_v3_report_test() +{ + # IGMP (IP Protocol 2) Version 3 Membership Report (Type 0x22) + devlink_trap_stats_test "IGMP Version 3 Membership Report" \ + "igmp_v3_report" $MZ $h1 -c 1 -a own -b 01:00:5E:00:00:01 \ + -A 192.0.2.1 -B 244.0.0.1 -t ip proto=2,p=22 -p 100 -q +} + +igmp_v2_leave_test() +{ + # IGMP (IP Protocol 2) Version 2 Leave Group (Type 0x17) + devlink_trap_stats_test "IGMP Version 2 Leave Group" \ + "igmp_v2_leave" $MZ $h1 -c 1 -a own -b 01:00:5E:00:00:02 \ + -A 192.0.2.1 -B 224.0.0.2 -t ip proto=2,p=17 -p 100 -q +} + +mld_payload_get() +{ + local type=$1; shift + local p + + type=$(printf "%x" $type) + p=$(: + )"3A:"$( : Next Header - ICMPv6 + )"00:"$( : Hdr Ext Len + )"00:00:00:00:00:00:"$( : Options and Padding + )"$type:"$( : ICMPv6.type + )"00:"$( : ICMPv6.code + )"00:"$( : ICMPv6.checksum + ) + echo $p +} + +mld_query_test() 
+{ + # MLD Multicast Listener Query (Type 130) + devlink_trap_stats_test "MLD Multicast Listener Query" "mld_query" \ + $MZ $h1 -6 -c 1 -A fe80::1 -B ff02::1 \ + -t ip hop=1,next=0,payload=$(mld_payload_get 130) -p 100 -q +} + +mld_v1_report_test() +{ + # MLD Version 1 Multicast Listener Report (Type 131) + devlink_trap_stats_test "MLD Version 1 Multicast Listener Report" \ + "mld_v1_report" $MZ $h1 -6 -c 1 -A fe80::1 -B ff02::16 \ + -t ip hop=1,next=0,payload=$(mld_payload_get 131) -p 100 -q +} + +mld_v2_report_test() +{ + # MLD Version 2 Multicast Listener Report (Type 143) + devlink_trap_stats_test "MLD Version 2 Multicast Listener Report" \ + "mld_v2_report" $MZ $h1 -6 -c 1 -A fe80::1 -B ff02::16 \ + -t ip hop=1,next=0,payload=$(mld_payload_get 143) -p 100 -q +} + +mld_v1_done_test() +{ + # MLD Version 1 Multicast Listener Done (Type 132) + devlink_trap_stats_test "MLD Version 1 Multicast Listener Done" \ + "mld_v1_done" $MZ $h1 -6 -c 1 -A fe80::1 -B ff02::16 \ + -t ip hop=1,next=0,payload=$(mld_payload_get 132) -p 100 -q +} + +ipv4_dhcp_test() +{ + devlink_trap_stats_test "IPv4 DHCP Port 67" "ipv4_dhcp" \ + $MZ $h1 -c 1 -a own -b bcast -A 0.0.0.0 -B 255.255.255.255 \ + -t udp sp=68,dp=67 -p 100 -q + + devlink_trap_stats_test "IPv4 DHCP Port 68" "ipv4_dhcp" \ + $MZ $h1 -c 1 -a own -b $(mac_get $rp1) -A 192.0.2.1 \ + -B 255.255.255.255 -t udp sp=67,dp=68 -p 100 -q +} + +ipv6_dhcp_test() +{ + devlink_trap_stats_test "IPv6 DHCP Port 547" "ipv6_dhcp" \ + $MZ $h1 -6 -c 1 -A fe80::1 -B ff02::1:2 -t udp sp=546,dp=547 \ + -p 100 -q + + devlink_trap_stats_test "IPv6 DHCP Port 546" "ipv6_dhcp" \ + $MZ $h1 -6 -c 1 -A fe80::1 -B ff02::1:2 -t udp sp=547,dp=546 \ + -p 100 -q +} + +arp_request_test() +{ + devlink_trap_stats_test "ARP Request" "arp_request" \ + $MZ $h1 -c 1 -a own -b bcast -t arp request -p 100 -q +} + +arp_response_test() +{ + devlink_trap_stats_test "ARP Response" "arp_response" \ + $MZ $h1 -c 1 -a own -b $(mac_get $rp1) -t arp reply -p 100 -q +} + +icmpv6_header_get() +{ + local type=$1; shift + local p + + type=$(printf "%x" $type) + p=$(: + )"$type:"$( : ICMPv6.type + )"00:"$( : ICMPv6.code + )"00:"$( : ICMPv6.checksum + ) + echo $p +} + +ipv6_neigh_solicit_test() +{ + devlink_trap_stats_test "IPv6 Neighbour Solicitation" \ + "ipv6_neigh_solicit" $MZ $h1 -6 -c 1 \ + -A fe80::1 -B ff02::1:ff00:02 \ + -t ip hop=1,next=58,payload=$(icmpv6_header_get 135) -p 100 -q +} + +ipv6_neigh_advert_test() +{ + devlink_trap_stats_test "IPv6 Neighbour Advertisement" \ + "ipv6_neigh_advert" $MZ $h1 -6 -c 1 -a own -b $(mac_get $rp1) \ + -A fe80::1 -B 2001:db8:1::2 \ + -t ip hop=1,next=58,payload=$(icmpv6_header_get 136) -p 100 -q +} + +ipv4_bfd_test() +{ + devlink_trap_stats_test "IPv4 BFD Control - Port 3784" "ipv4_bfd" \ + $MZ $h1 -c 1 -a own -b $(mac_get $rp1) \ + -A 192.0.2.1 -B 192.0.2.2 -t udp sp=49153,dp=3784 -p 100 -q + + devlink_trap_stats_test "IPv4 BFD Echo - Port 3785" "ipv4_bfd" \ + $MZ $h1 -c 1 -a own -b $(mac_get $rp1) \ + -A 192.0.2.1 -B 192.0.2.2 -t udp sp=49153,dp=3785 -p 100 -q +} + +ipv6_bfd_test() +{ + devlink_trap_stats_test "IPv6 BFD Control - Port 3784" "ipv6_bfd" \ + $MZ $h1 -6 -c 1 -a own -b $(mac_get $rp1) \ + -A 2001:db8:1::1 -B 2001:db8:1::2 \ + -t udp sp=49153,dp=3784 -p 100 -q + + devlink_trap_stats_test "IPv6 BFD Echo - Port 3785" "ipv6_bfd" \ + $MZ $h1 -6 -c 1 -a own -b $(mac_get $rp1) \ + -A 2001:db8:1::1 -B 2001:db8:1::2 \ + -t udp sp=49153,dp=3785 -p 100 -q +} + +ipv4_ospf_test() +{ + devlink_trap_stats_test "IPv4 OSPF - Multicast" "ipv4_ospf" \ + $MZ $h1 -c 1 
-a own -b 01:00:5e:00:00:05 \ + -A 192.0.2.1 -B 224.0.0.5 -t ip proto=89 -p 100 -q + + devlink_trap_stats_test "IPv4 OSPF - Unicast" "ipv4_ospf" \ + $MZ $h1 -c 1 -a own -b $(mac_get $rp1) \ + -A 192.0.2.1 -B 192.0.2.2 -t ip proto=89 -p 100 -q +} + +ipv6_ospf_test() +{ + devlink_trap_stats_test "IPv6 OSPF - Multicast" "ipv6_ospf" \ + $MZ $h1 -6 -c 1 -a own -b 33:33:00:00:00:05 \ + -A fe80::1 -B ff02::5 -t ip next=89 -p 100 -q + + devlink_trap_stats_test "IPv6 OSPF - Unicast" "ipv6_ospf" \ + $MZ $h1 -6 -c 1 -a own -b $(mac_get $rp1) \ + -A 2001:db8:1::1 -B 2001:db8:1::2 -t ip next=89 -p 100 -q +} + +ipv4_bgp_test() +{ + devlink_trap_stats_test "IPv4 BGP" "ipv4_bgp" \ + $MZ $h1 -c 1 -a own -b $(mac_get $rp1) \ + -A 192.0.2.1 -B 192.0.2.2 -t tcp sp=54321,dp=179,flags=rst \ + -p 100 -q +} + +ipv6_bgp_test() +{ + devlink_trap_stats_test "IPv6 BGP" "ipv6_bgp" \ + $MZ $h1 -6 -c 1 -a own -b $(mac_get $rp1) \ + -A 2001:db8:1::1 -B 2001:db8:1::2 \ + -t tcp sp=54321,dp=179,flags=rst -p 100 -q +} + +ipv4_vrrp_test() +{ + devlink_trap_stats_test "IPv4 VRRP" "ipv4_vrrp" \ + $MZ $h1 -c 1 -a own -b 01:00:5e:00:00:12 \ + -A 192.0.2.1 -B 224.0.0.18 -t ip proto=112 -p 100 -q +} + +ipv6_vrrp_test() +{ + devlink_trap_stats_test "IPv6 VRRP" "ipv6_vrrp" \ + $MZ $h1 -6 -c 1 -a own -b 33:33:00:00:00:12 \ + -A fe80::1 -B ff02::12 -t ip next=112 -p 100 -q +} + +ipv4_pim_test() +{ + devlink_trap_stats_test "IPv4 PIM - Multicast" "ipv4_pim" \ + $MZ $h1 -c 1 -a own -b 01:00:5e:00:00:0d \ + -A 192.0.2.1 -B 224.0.0.13 -t ip proto=103 -p 100 -q + + devlink_trap_stats_test "IPv4 PIM - Unicast" "ipv4_pim" \ + $MZ $h1 -c 1 -a own -b $(mac_get $rp1) \ + -A 192.0.2.1 -B 192.0.2.2 -t ip proto=103 -p 100 -q +} + +ipv6_pim_test() +{ + devlink_trap_stats_test "IPv6 PIM - Multicast" "ipv6_pim" \ + $MZ $h1 -6 -c 1 -a own -b 33:33:00:00:00:0d \ + -A fe80::1 -B ff02::d -t ip next=103 -p 100 -q + + devlink_trap_stats_test "IPv6 PIM - Unicast" "ipv6_pim" \ + $MZ $h1 -6 -c 1 -a own -b $(mac_get $rp1) \ + -A fe80::1 -B 2001:db8:1::2 -t ip next=103 -p 100 -q +} + +uc_loopback_test() +{ + # Add neighbours to the fake destination IPs, so that the packets are + # routed in the device and not trapped due to an unresolved neighbour + # exception. + ip -4 neigh add 192.0.2.3 lladdr 00:11:22:33:44:55 nud permanent \ + dev $rp1 + ip -6 neigh add 2001:db8:1::3 lladdr 00:11:22:33:44:55 nud permanent \ + dev $rp1 + + devlink_trap_stats_test "IPv4 Unicast Loopback" "uc_loopback" \ + $MZ $h1 -c 1 -a own -b $(mac_get $rp1) \ + -A 192.0.2.1 -B 192.0.2.3 -t udp sp=54321,dp=12345 -p 100 -q + + devlink_trap_stats_test "IPv6 Unicast Loopback" "uc_loopback" \ + $MZ $h1 -6 -c 1 -a own -b $(mac_get $rp1) \ + -A 2001:db8:1::1 -B 2001:db8:1::3 -t udp sp=54321,dp=12345 \ + -p 100 -q + + ip -6 neigh del 2001:db8:1::3 dev $rp1 + ip -4 neigh del 192.0.2.3 dev $rp1 +} + +local_route_test() +{ + # Use a fake source IP to prevent the trap from being triggered twice + # when the router sends back a port unreachable message. + devlink_trap_stats_test "IPv4 Local Route" "local_route" \ + $MZ $h1 -c 1 -a own -b $(mac_get $rp1) \ + -A 192.0.2.3 -B 192.0.2.2 -t udp sp=54321,dp=12345 -p 100 -q + + devlink_trap_stats_test "IPv6 Local Route" "local_route" \ + $MZ $h1 -6 -c 1 -a own -b $(mac_get $rp1) \ + -A 2001:db8:1::3 -B 2001:db8:1::2 -t udp sp=54321,sp=12345 \ + -p 100 -q +} + +external_route_test() +{ + # Add a dummy device through which the incoming packets should be + # routed. 
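+	# The address assignments below install connected routes for
+	# 203.0.113.0/24 and 2001:db8:10::/64 via dummy10, so the test packets
+	# sent to 203.0.113.2 and 2001:db8:10::2 resolve to routes whose
+	# egress device is not one of the tested switch ports.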
+ ip link add name dummy10 up type dummy + ip address add 203.0.113.1/24 dev dummy10 + ip -6 address add 2001:db8:10::1/64 dev dummy10 + + devlink_trap_stats_test "IPv4 External Route" "external_route" \ + $MZ $h1 -c 1 -a own -b $(mac_get $rp1) \ + -A 192.0.2.1 -B 203.0.113.2 -t udp sp=54321,dp=12345 -p 100 -q + + devlink_trap_stats_test "IPv6 External Route" "external_route" \ + $MZ $h1 -6 -c 1 -a own -b $(mac_get $rp1) \ + -A 2001:db8:1::1 -B 2001:db8:10::2 -t udp sp=54321,sp=12345 \ + -p 100 -q + + ip -6 address del 2001:db8:10::1/64 dev dummy10 + ip address del 203.0.113.1/24 dev dummy10 + ip link del dev dummy10 +} + +ipv6_uc_dip_link_local_scope_test() +{ + # Add a dummy link-local prefix route to allow the packet to be routed. + ip -6 route add fe80:1::/64 dev $rp2 + + devlink_trap_stats_test \ + "IPv6 Unicast Destination IP With Link-Local Scope" \ + "ipv6_uc_dip_link_local_scope" \ + $MZ $h1 -6 -c 1 -a own -b $(mac_get $rp1) \ + -A fe80::1 -B fe80:1::2 -t udp sp=54321,sp=12345 \ + -p 100 -q + + ip -6 route del fe80:1::/64 dev $rp2 +} + +ipv4_router_alert_get() +{ + local p + + # https://en.wikipedia.org/wiki/IPv4#Options + p=$(: + )"94:"$( : Option Number + )"04:"$( : Option Length + )"00:00:"$( : Option Data + ) + echo $p +} + +ipv4_router_alert_test() +{ + devlink_trap_stats_test "IPv4 Router Alert" "ipv4_router_alert" \ + $MZ $h1 -c 1 -a own -b $(mac_get $rp1) \ + -A 192.0.2.1 -B 198.51.100.3 \ + -t ip option=$(ipv4_router_alert_get) -p 100 -q +} + +ipv6_router_alert_get() +{ + local p + + # https://en.wikipedia.org/wiki/IPv6_packet#Hop-by-hop_options_and_destination_options + # https://tools.ietf.org/html/rfc2711#section-2.1 + p=$(: + )"11:"$( : Next Header - UDP + )"00:"$( : Hdr Ext Len + )"05:02:00:00:00:00:"$( : Option Data + ) + echo $p +} + +ipv6_router_alert_test() +{ + devlink_trap_stats_test "IPv6 Router Alert" "ipv6_router_alert" \ + $MZ $h1 -6 -c 1 -a own -b $(mac_get $rp1) \ + -A 2001:db8:1::1 -B 2001:db8:1::3 \ + -t ip next=0,payload=$(ipv6_router_alert_get) -p 100 -q +} + +ipv6_dip_all_nodes_test() +{ + devlink_trap_stats_test "IPv6 Destination IP \"All Nodes Address\"" \ + "ipv6_dip_all_nodes" \ + $MZ $h1 -6 -c 1 -a own -b 33:33:00:00:00:01 \ + -A 2001:db8:1::1 -B ff02::1 -t udp sp=12345,dp=54321 -p 100 -q +} + +ipv6_dip_all_routers_test() +{ + devlink_trap_stats_test "IPv6 Destination IP \"All Routers Address\"" \ + "ipv6_dip_all_routers" \ + $MZ $h1 -6 -c 1 -a own -b 33:33:00:00:00:02 \ + -A 2001:db8:1::1 -B ff02::2 -t udp sp=12345,dp=54321 -p 100 -q +} + +ipv6_router_solicit_test() +{ + devlink_trap_stats_test "IPv6 Router Solicitation" \ + "ipv6_router_solicit" \ + $MZ $h1 -6 -c 1 -a own -b 33:33:00:00:00:02 \ + -A fe80::1 -B ff02::2 \ + -t ip hop=1,next=58,payload=$(icmpv6_header_get 133) -p 100 -q +} + +ipv6_router_advert_test() +{ + devlink_trap_stats_test "IPv6 Router Advertisement" \ + "ipv6_router_advert" \ + $MZ $h1 -6 -c 1 -a own -b 33:33:00:00:00:01 \ + -A fe80::1 -B ff02::1 \ + -t ip hop=1,next=58,payload=$(icmpv6_header_get 134) -p 100 -q +} + +ipv6_redirect_test() +{ + devlink_trap_stats_test "IPv6 Redirect Message" \ + "ipv6_redirect" \ + $MZ $h1 -6 -c 1 -a own -b $(mac_get $rp1) \ + -A fe80::1 -B 2001:db8:1::2 \ + -t ip hop=1,next=58,payload=$(icmpv6_header_get 137) -p 100 -q +} + +ptp_event_test() +{ + # PTP is only supported on Spectrum-1, for now. 
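+	# $DEVLINK_VIDDID is the PCI vendor:device ID of the device under
+	# test; 15b3:cb84 identifies the Spectrum-1 ASIC, so the test is a
+	# no-op on other devices.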
+ [[ "$DEVLINK_VIDDID" != "15b3:cb84" ]] && return + + # PTP Sync (0) + devlink_trap_stats_test "PTP Time-Critical Event Message" "ptp_event" \ + $MZ $h1 -c 1 -a own -b 01:00:5e:00:01:81 \ + -A 192.0.2.1 -B 224.0.1.129 \ + -t udp sp=12345,dp=319,payload=10 -p 100 -q +} + +ptp_general_test() +{ + # PTP is only supported on Spectrum-1, for now. + [[ "$DEVLINK_VIDDID" != "15b3:cb84" ]] && return + + # PTP Announce (b) + devlink_trap_stats_test "PTP General Message" "ptp_general" \ + $MZ $h1 -c 1 -a own -b 01:00:5e:00:01:81 \ + -A 192.0.2.1 -B 224.0.1.129 \ + -t udp sp=12345,dp=320,payload=1b -p 100 -q +} + +flow_action_sample_test() +{ + # Install a filter that samples every incoming packet. + tc qdisc add dev $rp1 clsact + tc filter add dev $rp1 ingress proto all pref 1 handle 101 matchall \ + skip_sw action sample rate 1 group 1 + + devlink_trap_stats_test "Flow Sampling" "flow_action_sample" \ + $MZ $h1 -c 1 -a own -b $(mac_get $rp1) \ + -A 192.0.2.1 -B 198.51.100.1 -t udp sp=12345,dp=54321 -p 100 -q + + tc filter del dev $rp1 ingress proto all pref 1 handle 101 matchall + tc qdisc del dev $rp1 clsact +} + +flow_action_trap_test() +{ + # Install a filter that traps a specific flow. + tc qdisc add dev $rp1 clsact + tc filter add dev $rp1 ingress proto ip pref 1 handle 101 flower \ + skip_sw ip_proto udp src_port 12345 dst_port 54321 action trap + + devlink_trap_stats_test "Flow Trapping (Logging)" "flow_action_trap" \ + $MZ $h1 -c 1 -a own -b $(mac_get $rp1) \ + -A 192.0.2.1 -B 198.51.100.1 -t udp sp=12345,dp=54321 -p 100 -q + + tc filter del dev $rp1 ingress proto ip pref 1 handle 101 flower + tc qdisc del dev $rp1 clsact +} + +trap cleanup EXIT + +setup_prepare +setup_wait + +tests_run + +exit $EXIT_STATUS diff --git a/tools/testing/selftests/net/forwarding/devlink_lib.sh b/tools/testing/selftests/net/forwarding/devlink_lib.sh index e27236109235..f0e6be4c09e9 100644 --- a/tools/testing/selftests/net/forwarding/devlink_lib.sh +++ b/tools/testing/selftests/net/forwarding/devlink_lib.sh @@ -423,6 +423,29 @@ devlink_trap_drop_cleanup() tc filter del dev $dev egress protocol $proto pref $pref handle $handle flower } +devlink_trap_stats_test() +{ + local test_name=$1; shift + local trap_name=$1; shift + local send_one="$@" + local t0_packets + local t1_packets + + RET=0 + + t0_packets=$(devlink_trap_rx_packets_get $trap_name) + + $send_one && sleep 1 + + t1_packets=$(devlink_trap_rx_packets_get $trap_name) + + if [[ $t1_packets -eq $t0_packets ]]; then + check_err 1 "Trap stats did not increase" + fi + + log_test "$test_name" +} + devlink_trap_policers_num_get() { devlink -j -p trap policer show | jq '.[]["'$DEVLINK_DEV'"] | length' diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_pie.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_pie.json new file mode 100644 index 000000000000..1cda2e11b3ad --- /dev/null +++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_pie.json @@ -0,0 +1,21 @@ +[ + { + "id": "83be", + "name": "Create FQ-PIE with invalid number of flows", + "category": [ + "qdisc", + "fq_pie" + ], + "setup": [ + "$IP link add dev $DUMMY type dummy || /bin/true" + ], + "cmdUnderTest": "$TC qdisc add dev $DUMMY root fq_pie flows 65536", + "expExitCode": "2", + "verifyCmd": "$TC qdisc show dev $DUMMY", + "matchPattern": "qdisc", + "matchCount": "0", + "teardown": [ + "$IP link del dev $DUMMY" + ] + } +] |