path: root/drivers/net/ethernet/intel/ixgbe
2012-09-13  Merge commit 'v3.6-rc5' into next  (Bjorn Helgaas)
* commit 'v3.6-rc5': (1098 commits)
  Linux 3.6-rc5
  HID: tpkbd: work even if the new Lenovo Keyboard driver is not configured
  Remove user-triggerable BUG from mpol_to_str
  xen/pciback: Fix proper FLR steps.
  uml: fix compile error in deliver_alarm()
  dj: memory scribble in logi_dj
  Fix order of arguments to compat_put_time[spec|val]
  xen: Use correct masking in xen_swiotlb_alloc_coherent.
  xen: fix logical error in tlb flushing
  xen/p2m: Fix one-off error in checking the P2M tree directory.
  powerpc: Don't use __put_user() in patch_instruction
  powerpc: Make sure IPI handlers see data written by IPI senders
  powerpc: Restore correct DSCR in context switch
  powerpc: Fix DSCR inheritance in copy_thread()
  powerpc: Keep thread.dscr and thread.dscr_inherit in sync
  powerpc: Update DSCR on all CPUs when writing sysfs dscr_default
  powerpc/powernv: Always go into nap mode when CPU is offline
  powerpc: Give hypervisor decrementer interrupts their own handler
  powerpc/vphn: Fix arch_update_cpu_topology() return value
  ARM: gemini: fix the gemini build
  ...
Conflicts:
  drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
  drivers/rapidio/devices/tsi721.c
2012-09-12  Merge branch 'pci/stephen-const' into next  (Bjorn Helgaas)
* pci/stephen-const:
  make drivers with pci error handlers const
  scsi: make pci error handlers const
  netdev: make pci_error_handlers const
  PCI: Make pci_error_handlers const
2012-09-07  netdev: make pci_error_handlers const  (Stephen Hemminger)
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
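The conversion is mechanical; a minimal before/after sketch for a driver like ixgbe (the handler prototypes below are illustrative assumptions, not quoted from the driver):

    #include <linux/pci.h>

    /* handler prototypes; in a real driver these are its existing callbacks */
    pci_ers_result_t ixgbe_io_error_detected(struct pci_dev *pdev,
                                             pci_channel_state_t state);
    pci_ers_result_t ixgbe_io_slot_reset(struct pci_dev *pdev);
    void ixgbe_io_resume(struct pci_dev *pdev);

    /* Before: static struct pci_error_handlers ixgbe_err_handler = { ... };
     * After: the table is const and can live in read-only data. */
    static const struct pci_error_handlers ixgbe_err_handler = {
            .error_detected = ixgbe_io_error_detected,
            .slot_reset     = ixgbe_io_slot_reset,
            .resume         = ixgbe_io_resume,
    };

    static struct pci_driver ixgbe_driver = {
            .name        = "ixgbe",
            .err_handler = &ixgbe_err_handler,
            /* .id_table, .probe, .remove, ... */
    };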
2012-09-04  ixgbe: remove old init remnant  (Eliezer Tamir)
Remove a for loop that does nothing in ixgbe_probe(). This is a remnant from when we had IO bars (compare to the ixgb code). Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-08-23  PCI: Introduce pci_pcie_type(dev) to replace pci_dev->pcie_type  (Yijing Wang)
Introduce an inline function pci_pcie_type(dev) to extract PCIe device type from pci_dev->pcie_flags_reg field, and prepare for removing pci_dev->pcie_type. Signed-off-by: Yijing Wang <wangyijing@huawei.com> Signed-off-by: Jiang Liu <jiang.liu@huawei.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
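A sketch of what such an accessor looks like (named pcie_type_of here so it is clearly an illustration, not the kernel's definition):

    #include <linux/pci.h>

    /* The device/port type sits in bits 7:4 of the PCI Express Capabilities
     * register, which the PCI core caches in pcie_flags_reg. */
    static inline int pcie_type_of(const struct pci_dev *dev)
    {
            return (dev->pcie_flags_reg & PCI_EXP_FLAGS_TYPE) >> 4;
    }

    /* Callers then compare against the PCI_EXP_TYPE_* constants, e.g.
     *         if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) ...
     * instead of reading dev->pcie_type directly. */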
2012-08-22  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (David S. Miller)
2012-08-16  ixgbe: Rewrite code related to configuring IFCS bit in Tx descriptor  (Alexander Duyck)
This change updates the code related to configuring the transmit frame checksum. Specifically I have updated the code so that we can only skip inserting the checksum in the case that we are not performing some other offload that will modify the frame data. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
2012-08-16  ixgbe: Roll RSC code into non-EOP code  (Alexander Duyck)
This change moves the RSC code into the non-EOP descriptor handling function. The main motivation behind this change is to help reduce the overhead in the non-RSC case. Previously the non-RSC path code would always be checking for append count even if RSC had been disabled. Now this code is completely skipped in a single conditional check instead of having to make two separate checks. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
2012-08-16  ixgbe: Make allocating skb and placing data in it a separate function  (Alexander Duyck)
This patch creates a function named ixgbe_fetch_rx_buffer. The sole purpose of this function is to retrieve a single buffer off of the ring and to place it in an skb. The advantage to doing this is that it helps improve the readability since I can decrease the indentation for the code in this section. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
2012-08-16  ixgbe: Copybreak sooner to avoid get_page/put_page and offset change overhead  (Alexander Duyck)
This change makes it so that if only the first 256 bytes of a buffer are used we just copy the data out and leave the offset and page count unchanged. There are multiple advantages to this. First it allows us to reuse the page much more in the case of pages larger than 4K. It also allows us to avoid some expensive atomic operations in the form of get_page/put_page. In perf I have seen CPU utilization for put_page drop from 3.5% to 1.8% as a result of this patch when doing small packet routing, and packet rates increased by about 3%. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
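A minimal sketch of the copybreak idea, assuming the 256-byte threshold described above (the helper name is hypothetical):

    #include <linux/skbuff.h>
    #include <linux/mm.h>
    #include <linux/string.h>

    #define RX_COPYBREAK 256        /* threshold described above */

    /* If the frame fits under the copybreak threshold, memcpy it into the
     * skb head and leave the page's offset and reference count untouched,
     * so the same buffer can be handed straight back to the hardware. */
    static bool rx_copybreak(struct sk_buff *skb, struct page *page,
                             unsigned int offset, unsigned int size)
    {
            if (size > RX_COPYBREAK)
                    return false;   /* too large: attach the page as a frag */

            memcpy(__skb_put(skb, size), page_address(page) + offset, size);
            return true;            /* page reusable, no get_page/put_page */
    }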
2012-08-16  ixgbe: Make pull tail function separate from rest of cleanup_headers  (Alexander Duyck)
This change creates a separate function for functionality similar to pskb_pull_tail. The main motivation for moving it to a separate function is so that later I can just skip this function in the case where we have already copied the buffer into skb->head. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
2012-08-16  ixgbe: Have the CPU take ownership of the buffers sooner  (Alexander Duyck)
This patch makes it so that we will always have ownership of the buffers by the time we get to ixgbe_add_rx_frag. This is necessary as I am planning to add a copy-break to ixgbe_add_rx_frag and in order for that to function correctly we need the CPU to have ownership of the buffer. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
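"Taking ownership" here means syncing the written region of the page back to the CPU before anything reads it; a sketch with illustrative names:

    #include <linux/dma-mapping.h>

    /* Give the CPU ownership of the part of the Rx page the hardware just
     * wrote, before ixgbe_add_rx_frag (or a copybreak memcpy) touches it. */
    static void rx_sync_for_cpu(struct device *dev, dma_addr_t dma,
                                unsigned int page_offset, unsigned int len)
    {
            dma_sync_single_range_for_cpu(dev, dma, page_offset, len,
                                          DMA_FROM_DEVICE);
    }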
2012-08-16  ixgbe: Only use double buffering if page size is less than 8K  (Alexander Duyck)
This change makes it so that we do not use double buffering if the page size is larger than 4K. Instead we will simply walk through the page using up to 3K per receive, and if we receive less than that we only move the offset by that amount. We will free the page when there is no longer any space left that we can use instead of checking the page count to see if we can cycle back to the start. The main motivation behind this is to avoid the unnecessary truesize cost for using a half page when most packets are 2K or smaller. With this new approach the largest possible truesize for a page fragment will be 3K when PAGE_SIZE is larger than 4K. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
2012-08-16  ixgbe: combine ixgbe_add_rx_frag and ixgbe_can_reuse_page  (Alexander Duyck)
This patch combines ixgbe_add_rx_frag and ixgbe_can_reuse_page into a single function. The main motivation behind this is to make better use of the values so that we don't have to load them from memory and into registers twice. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
2012-08-16  ixgbe: Remove code that was initializing Rx page offset  (Alexander Duyck)
This change reverts an earlier patch that introduced ixgbe_init_rx_page_offset. The idea behind the function was to provide some variation in the starting offset for the page in order to reduce hot-spots in the cache. However it doesn't appear to provide any significant benefit in the testing I have done. It has however been a source of several bugs, and it blocks us from being able to use 2K fragments on larger page sizes. So the decision I made was to remove it. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
2012-08-10  ixgbe: add missing braces  (Emil Tantilov)
This patch adds missing braces around the 10gig link check to include the check for KR support. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Reported-by: Sascha Wildner <saw@online.de> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2012-07-31  netvm: propagate page->pfmemalloc from skb_alloc_page to skb  (Mel Gorman)
The skb->pfmemalloc flag gets set to true iff, during the slab allocation of data in __alloc_skb, the PFMEMALLOC reserves were used. If page splitting is used, it is possible that pages will be allocated from the PFMEMALLOC reserve without propagating this information to the skb. This patch propagates page->pfmemalloc from pages allocated for fragments to the skb. It works by reintroducing and expanding the skb_alloc_page() API to take an skb. If the page was allocated from pfmemalloc reserves, it is automatically copied. If the driver allocates the page before the skb, it should call skb_propagate_pfmemalloc() after the skb is allocated to ensure the flag is copied properly. Failure to do so is not critical. The resulting driver may perform slower if it is used for swap-over-NBD or swap-over-NFS but it should not result in failure. [davem@davemloft.net: API rename and consistency] Signed-off-by: Mel Gorman <mgorman@suse.de> Acked-by: David S. Miller <davem@davemloft.net> Cc: Neil Brown <neilb@suse.de> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Mike Christie <michaelc@cs.wisc.edu> Cc: Eric B Munson <emunson@mgebm.net> Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: Sebastian Andrzej Siewior <sebastian@breakpoint.cc> Cc: Mel Gorman <mgorman@suse.de> Cc: Christoph Lameter <cl@linux.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
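A sketch of the driver-side rule described above; the wrapper and the choice of netdev_alloc_skb_ip_align are illustrative, while skb_propagate_pfmemalloc is the call the text introduces:

    #include <linux/skbuff.h>
    #include <linux/netdevice.h>

    /* The page was allocated before the skb existed, so propagate its
     * pfmemalloc state once the skb is available. */
    static struct sk_buff *rx_wrap_page(struct net_device *netdev,
                                        struct page *page, unsigned int len)
    {
            struct sk_buff *skb = netdev_alloc_skb_ip_align(netdev, len);

            if (!skb)
                    return NULL;

            skb_propagate_pfmemalloc(page, skb);    /* copies the flag if set */
            return skb;
    }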
2012-07-26  ixgbe: fix panic while dumping packets on Tx hang with IOMMU  (Emil Tantilov)
This patch resolves a "BUG: unable to handle kernel paging request at ..." oops while dumping packet data. The issue occurs with IOMMU enabled due to the address provided by phys_to_virt(). This patch makes use of skb->data on Tx and the virtual address of the pages allocated for Rx. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
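The fix amounts to never handing a DMA (bus) address to phys_to_virt(); a sketch of dump helpers that use addresses the driver already owns (names and prefixes are illustrative):

    #include <linux/kernel.h>
    #include <linux/printk.h>
    #include <linux/skbuff.h>
    #include <linux/mm.h>

    /* Tx: the skb's data pointer is already a valid kernel virtual address. */
    static void dump_tx_data(const struct sk_buff *skb, unsigned int len)
    {
            print_hex_dump(KERN_INFO, "tx: ", DUMP_PREFIX_OFFSET, 16, 1,
                           skb->data, min(len, skb->len), true);
    }

    /* Rx: derive the virtual address from the page, not from the DMA handle. */
    static void dump_rx_data(struct page *page, unsigned int offset,
                             unsigned int len)
    {
            print_hex_dump(KERN_INFO, "rx: ", DUMP_PREFIX_OFFSET, 16, 1,
                           page_address(page) + offset, len, true);
    }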
2012-07-22  ixgbe: Fix build with PCI_IOV enabled.  (David S. Miller)
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-07-21  ixgbe: Use 1TC DCB instead of disabling DCB for MSI and legacy interrupts  (Alexander Duyck)
This change makes it so that we can use 1TC DCB in the case of MSI and legacy interrupts. The advantage to this is that it allows us to fully support FCoE w/ DCB instead of having to drop to link flow control only when using these interrupt modes. Cc: John Fastabend <john.r.fastabend@intel.com> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-07-21  ixgbe: add support for new 82599 device  (Don Skidmore)
This patch adds support for a new 82599 device that supports WoL. Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-07-21  ixgbe: remove extra unused queues in DCB + FCoE case  (John Fastabend)
With DCB and FCoE configured extra queues may be allocated and never used. After this patch we calculate the max correctly. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-07-21  ixgbe: fix RAR entry counting for generic and fdb_add()  (John Fastabend)
Do RAR entry accounting correctly so that errors are reported and promisc mode is set correctly when the number of entries exceeds the hardware limits. This can happen with many macvlan devices attached to the PF or by adding many fdb entries in SR-IOV modes. Also this includes a small refactor to fdb_add() to avoid having so many nested if/else statements after adding a check for the number of RAR entries. The max entries for the PF is currently 16; we allow 15 additional entries to account for the defined MAC. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-07-21  ixgbe: Use num_tcs.pg_tcs as upper limit for TC when checking based on UP  (Alexander Duyck)
This change makes it so the function ixgbe_dcb_get_tc_from_up will use the num_tcs.pg_tcs to determine the starting value for determining a traffic class based on a user priority. The main motivation for this change is to address possible bad configurations in which more TCs' worth of data are populated than there are actual TCs. By limiting this value we can at least make certain we are not providing a map with values that are out of range. As a result any user priorities that are setup in the configuration with a traffic class mapping higher than what the hardware supports will be reported as being on TC 0. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Acked-by: Greg Rose <gregory.v.rose@intel.com> Tested-by: Stephen Ko <stephen.s.ko@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-07-21  ixgbe: Reduce Rx header size to what is actually used  (Alexander Duyck)
The recent changes to netdev_alloc_skb make it so that the size of the buffer now has a more direct effect on the truesize. So in order to make best use of the piece of a page we are allocated I am reducing the IXGBE_RX_HDR_SIZE to 256 so that our truesize will be reduced by 256 bytes as well. This should result in performance improvements since the number of uses per page should increase from 4 to 6 in the case of a 4K page. In addition we should see socket performance improvements due to the truesize dropping to less than 1K for buffers less than 256 bytes. Cc: Eric Dumazet <edumazet@google.com> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-07-21  ixgbe: Fix handling of FDIR_HASH flag  (Alexander Duyck)
This change makes it so that we can use the atr_sample_rate to determine if we are capable of supporting ATR. The advantage to this approach is that it allows us to now determine the setting of the IXGBE_FLAG_FDIR_HASH_CAPABLE based on the queueing scheme, instead of the queueing scheme being based on the flag. Using this approach there are essentially 5 conditions that must be checked prior to trying to enable ATR:
  1. Is SR-IOV disabled?
  2. Are the number of TCs <= 1?
  3. Is RSS queueing limit greater than 1?
  4. Is atr_sample_rate set?
  5. Is Flow Director perfect filtering disabled?
If any of these conditions are enabled they should disable ATR filtering. Note that in the case of conditions 1 through 4 being met we will set things up for ATR queueing, however if test 5 fails we will still leave the queues allocated for use by perfect filters. The reason for this is to allow for us to switch back and forth between ntuple and ATR without needing to reallocate the descriptor rings. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
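Restated as code, the five checks collapse into one predicate; the config struct below is a hypothetical summary of the inputs, not the driver's structure:

    #include <linux/types.h>

    struct nic_cfg {                        /* hypothetical input summary */
            bool    sriov_enabled;
            u8      num_tcs;
            u16     rss_limit;
            u8      atr_sample_rate;
            bool    fdir_perfect;
    };

    static bool atr_supported(const struct nic_cfg *cfg)
    {
            if (cfg->sriov_enabled)         /* 1. SR-IOV must be disabled   */
                    return false;
            if (cfg->num_tcs > 1)           /* 2. at most one traffic class */
                    return false;
            if (cfg->rss_limit <= 1)        /* 3. RSS limit must exceed 1   */
                    return false;
            if (!cfg->atr_sample_rate)      /* 4. a sample rate must be set */
                    return false;
            if (cfg->fdir_perfect)          /* 5. no perfect filtering      */
                    return false;
            return true;
    }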
2012-07-21  ixgbe: Change how we check for pre-existing and assigned VFs  (Alexander Duyck)
This patch does two things. First it drops the unnecessary work of searching for enabled VFs when we first bring up the adapter and instead just uses pci_num_vf to determine how many VFs are enabled on the adapter. The second thing it does is drop the use of vfdev from the vf_data_storage structure. Instead we just search the entire system for a VF that has us as its PF, and then if that VF is assigned we indicate that the VFs are assigned. This allows us to still check for assigned VFs even if the vfinfo allocation has failed, or vfinfo has been freed. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Acked-by: Greg Rose <gregory.v.rose@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Tested-by: Sibai Li <sibai.li@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
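A sketch of the two steps described above (the helper and the VF device ID parameter are illustrative; pci_num_vf, pci_get_device and PCI_DEV_FLAGS_ASSIGNED are the PCI-core pieces involved, and physfn assumes a kernel built with SR-IOV support):

    #include <linux/pci.h>

    static bool pf_has_assigned_vfs(struct pci_dev *pf, u16 vf_device_id)
    {
            struct pci_dev *vfdev = NULL;

            if (!pci_num_vf(pf))            /* step 1: any VFs enabled at all? */
                    return false;

            /* step 2: walk VF devices, only counting those whose PF is us */
            while ((vfdev = pci_get_device(pf->vendor, vf_device_id, vfdev))) {
                    if (vfdev->is_virtfn && vfdev->physfn == pf &&
                        (vfdev->dev_flags & PCI_DEV_FLAGS_ASSIGNED)) {
                            pci_dev_put(vfdev);
                            return true;    /* at least one VF is assigned */
                    }
            }
            return false;
    }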
2012-07-21  ixgbe: Drop probe_vf and merge functionality into ixgbe_enable_sriov  (Alexander Duyck)
This is meant to fix a bug in which we were not checking for pre-existing VFs if we were not setting the max_vfs value at driver load. What happens now is that we always call ixgbe_enable_sriov and this checks for pre-existing VFs or requested VFs prior to deciding on no SR-IOV. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Tested-by: Sibai Li <sibai.li@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-07-20  Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next  (David S. Miller)
Jeff Kirsher says:
====================
This series contains updates to ixgbe.
...
Alexander Duyck (9):
  ixgbe: Use VMDq offset to indicate the default pool
  ixgbe: Fix memory leak when SR-IOV VFs are direct assigned
  ixgbe: Drop references to deprecated pci_ DMA api and instead use dma_ API
  ixgbe: Cleanup configuration of FCoE registers
  ixgbe: Merge all FCoE percpu values into a single structure
  ixgbe: Make FCoE allocation and configuration closer to how rings work
  ixgbe: Correctly set SAN MAC RAR pool to default pool of PF
  ixgbe: Only enable anti-spoof on VF pools
  ixgbe: Enable FCoE FSO and CRC offloads based on CAPABLE instead of ENABLED flag
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-07-20  ixgbe: use PCI_VENDOR_ID_INTEL  (Jon Mason)
Use PCI_VENDOR_ID_INTEL from pci_ids.h instead of creating its own vendor ID #define. Signed-off-by: Jon Mason <jdmason@kudzu.us> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Cc: Jesse Brandeburg <jesse.brandeburg@intel.com> Cc: Bruce Allan <bruce.w.allan@intel.com> Cc: Carolyn Wyborny <carolyn.wyborny@intel.com> Cc: Don Skidmore <donald.c.skidmore@intel.com> Cc: Greg Rose <gregory.v.rose@intel.com> Cc: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com> Cc: Alex Duyck <alexander.h.duyck@intel.com> Cc: John Ronciak <john.ronciak@intel.com> Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2012-07-19  ixgbe: Enable FCoE FSO and CRC offloads based on CAPABLE instead of ENABLED flag  (Alexander Duyck)
Instead of only setting the FCOE segmentation offload and CRC offload flags if we enable FCoE, we could just set them always since there are no modifications needed to the hardware or adapter FCoE structure in order to use these features. The advantage to this is that if FCoE enablement fails, for example because SR-IOV was enabled on 82599, we will still have use of the FCoE segmentation offload and Tx/Rx CRC offloads which should still help to improve the FCoE performance. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-07-19  ixgbe: Only enable anti-spoof on VF pools  (Alexander Duyck)
The current logic is enabling anti-spoof on all pools and then clearing anti-spoof on just the first PF pool. The correct approach is to only set anti-spoof on the VF pools and to leave all of the PF pools unchecked. This allows for items such as FCoE to use adjacent pools within the PF for transmit and receive queues without the traffic being blocked by this security feature. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Tested-by: Sibai Li <sibai.li@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-07-19  ixgbe: Correctly set SAN MAC RAR pool to default pool of PF  (Alexander Duyck)
This change corrects an issue in which an FCoE enabled adapter was always setting the FCoE SAN MAC MPSAR register to 0x1. This results in the first VF being assigned the SAN MAC address in the case of SR-IOV and as such is incorrect. To resolve this I am adding a new function that will update the SAN MAC pool address after reset. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-07-19  ixgbe: Make FCoE allocation and configuration closer to how rings work  (Alexander Duyck)
This patch changes the behavior of the FCoE configuration so that it is much closer to how the main body of the ixgbe driver works for ring allocation. The first piece is the ixgbe_fcoe_ddp_enable/disable calls. These allocate the percpu values and if successful set the fcoe_ddp_xid value indicating that we can support DDP. The next piece is the ixgbe_setup/free_ddp_resources calls. These are called on open/close and will allocate and free the DMA pools. Finally ixgbe_configure_fcoe is now just register configuration. It can go through and enable the registers for the FCoE redirection offload, and FIP configuration without any interference from the DDP pool allocation. The net result of all this is twofold. First it adds a certain amount of exception handling. So for example if ixgbe_setup_fcoe_resources fails we will actually generate an error in open and refuse to bring up the interface. Secondly it provides a much more graceful failure case than the previous model which would skip setting up the registers for FCoE on failure to allocate DDP resources leaving no Rx functionality enabled instead of just disabling DDP. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-07-19  ixgbe: Merge all FCoE percpu values into a single structure  (Alexander Duyck)
This change merges the 2 statistics values for noddp and noddp_ext_buff and the dma_pool into a single structure that can be allocated per CPU. The advantages to this are several fold. First we only need to do one alloc_percpu call now instead of 3, so that means less overhead for handling memory allocation failures. Secondly in the case of ixgbe_fcoe_ddp_setup we only need to call get_cpu once which makes things a bit cleaner since we can drop a put_cpu() from the exception path. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
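The result is roughly one per-CPU structure and a single alloc_percpu() call; struct and function names below are illustrative:

    #include <linux/percpu.h>
    #include <linux/dmapool.h>
    #include <linux/types.h>

    struct fcoe_ddp_percpu {
            struct dma_pool *pool;           /* per-CPU DDP descriptor pool */
            u64              noddp;          /* stats that used to be their */
            u64              noddp_ext_buff; /* own per-CPU allocations     */
    };

    static struct fcoe_ddp_percpu __percpu *fcoe_alloc_ddp_percpu(void)
    {
            /* one allocation (and one failure path) instead of three */
            return alloc_percpu(struct fcoe_ddp_percpu);
    }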
2012-07-19  ixgbe: Cleanup configuration of FCoE registers  (Alexander Duyck)
This change makes it so we always use the FCoE redirection table. We just set all 8 entries to the same value in the case of only having one queue for FCoE. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-07-19  ixgbe: Drop references to deprecated pci_ DMA api and instead use dma_ API  (Alexander Duyck)
The networking side of the code had already been updated to use dma_ calls instead of the old pci_ calls. However it looks like the FCoE code was never updated. This change goes through and moves everything from the pci APIs to the dma APIs. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
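The conversion is one-for-one; a sketch of the pattern (the wrapper is illustrative):

    #include <linux/pci.h>
    #include <linux/dma-mapping.h>

    /* Old form, for comparison:
     *         dma = pci_map_single(pdev, buf, len, PCI_DMA_TODEVICE);
     * The dma_ API takes the underlying struct device and a DMA_* direction. */
    static dma_addr_t map_fcoe_buf(struct pci_dev *pdev, void *buf, size_t len)
    {
            dma_addr_t dma = dma_map_single(&pdev->dev, buf, len, DMA_TO_DEVICE);

            if (dma_mapping_error(&pdev->dev, dma))
                    return 0;               /* caller treats 0 as failure here */
            return dma;
    }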
2012-07-19  ixgbe: Fix memory leak when SR-IOV VFs are direct assigned  (Alexander Duyck)
The VF driver had a memory leak that would occur if VFs were assigned to a guest. The amount of leak would vary with the number of VFs but could max out at about 14K per PF. To reproduce the leak all you would need to do is enable all the VFs on the first PF. Then start a loop of loading and unloading the driver with max_vfs=63 for the first port. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Tested-by: Sibai Li <sibai.li@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-07-19  ixgbe: Use VMDq offset to indicate the default pool  (Alexander Duyck)
This change makes it so that we can use the VMDq ring feature offset value to determine the default pool instead of using num_vfs. The reason for this change is to avoid issues should we fail to allocate vfinfo but have pre-existing VFs. What should happen in this case is that num_vfs will go to 0, but the VMDq offset will contain the location of the first PF pool. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Tested-by: Sibai Li <Sibai.li@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-07-18  ixgbe: Cleanup holes in flags after removing several of them  (Alexander Duyck)
This change is just meant to defragment the flags as there are several holes that have been introduced since several features, or the flags for them, have been removed. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-07-18  ixgbe: Retire RSS enabled and capable flags  (Alexander Duyck)
All of our hardware supports RSS even if it is only for a single queue. So instead of toting around the RSS enable flag I am updating the code so that all devices are enabled and if we want to disable RSS it is indicated via the RSS mask. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-07-18  ixgbe: Add support for SR-IOV w/ DCB or RSS  (Alexander Duyck)
This change essentially makes it so that we can enable almost all of the features all at once. This patch allows for the combination of SR-IOV, DCB, and FCoE in the case of the x540. It also beefs up the SR-IOV by adding support for RSS to the PF. The testing matrix gets to be very complex for this patch as there are a number of different features and subsets for queueing options. I tried to narrow these down a bit by restricting the PF to only supporting 4TC DCB when it is enabled in addition to SR-IOV. Cc: Greg Rose <gregory.v.rose@intel.com> Cc: John Fastabend <john.r.fastabend@intel.com> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-07-18  ixgbe: Update configure virtualization to allow for multiple PF pools  (Alexander Duyck)
This change allows all pools from the default pool forward to be enabled via ixgbe_configure_virtualization. This is needed as we are planning to use queues belonging to adjacent pools for FCoE when SR-IOV and FCoE are both enabled. In addition this patch contains some minor formatting changes as there were a few spots that seemed to be in need of some cleanup. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Stephen Ko <stephen.s.ko@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-07-17  ixgbe: Cleanup logic for MRQC and MTQC configuration  (Alexander Duyck)
This change is meant to make the code much more readable for MTQC and MRQC configuration. The big change is that I simplified much of the logic so that we are essentially handling just 4 cases and their variants. In the cases where RSS is disabled we are actually just programming the RETA table with all 1s resulting in a single queue RSS. In the case of SR-IOV I am treating that as a subset of VMDq. This all results in the following configuration for the hardware:

                DCB
                En          Dis
    VMDq  En    VMDQ/DCB    VMDq/RSS
          Dis   DCB/RSS     RSS

Cc: John Fastabend <john.r.fastabend@intel.com> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Stephen Ko <stephen.s.ko@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-07-17  ixgbe: Update the logic for ixgbe_cache_ring_dcb and DCB RSS configuration  (Alexander Duyck)
This change cleans up some of the logic in an attempt to try and simplify things for how we are configuring DCB w/ RSS. In this patch I basically did 3 things. I updated the logic for getting the first register index. I applied the fact that all TCs get the same number of queues to simplify the looping logic in caching the DCB ring register. Finally I updated how we configure the RQTC register to match the fact that all TCs are assigned the same number of queues. Cc: John Fastabend <john.r.fastabend@intel.com> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-07-17  ixgbe: Move configuration of set_real_num_rx/tx_queues into open  (Alexander Duyck)
It makes much more sense for us to configure the real number of Tx and Rx queues in the ixgbe_open call than it does in ixgbe_set_num_queues. By setting the number in ixgbe_open we can avoid a number of unnecessary updates and only have to make the calls once. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
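A sketch of doing this once from the open path (the wrapper is illustrative; the netif_set_real_num_* calls are the kernel API in question):

    #include <linux/netdevice.h>

    static int open_set_queue_counts(struct net_device *netdev,
                                     unsigned int num_tx, unsigned int num_rx)
    {
            int err;

            /* tell the stack how many queues are actually in use */
            err = netif_set_real_num_tx_queues(netdev, num_tx);
            if (err)
                    return err;

            return netif_set_real_num_rx_queues(netdev, num_rx);
    }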
2012-07-17  ixgbe: Handle failures in the ixgbe_setup_rx/tx_resources calls  (Alexander Duyck)
Previously we were exiting without cleaning up the memory internally on the ixgbe_setup_rx_resources and ixgbe_setup_tx_resources calls. Instead of forcing the caller to clean things up for us we should instead just unwind the rings and free the memory as we go. This way we can more gracefully clean up the rings in the event of an allocation failure. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
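A sketch of the unwind-as-you-go pattern described above; the ring type and per-ring helpers are stand-ins, not the driver's symbols:

    struct ring { void *desc; };            /* stand-in for the ring struct */
    extern int  setup_one_ring(struct ring *ring);
    extern void free_one_ring(struct ring *ring);

    static int setup_all_rings(struct ring *rings, int count)
    {
            int i, err = 0;

            for (i = 0; i < count; i++) {
                    err = setup_one_ring(&rings[i]);
                    if (!err)
                            continue;

                    while (i--)     /* free what was allocated, newest first */
                            free_one_ring(&rings[i]);
                    break;
            }
            return err;
    }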
2012-07-17  ixgbe: Ping the VFs on link status change to trigger link change  (Alexander Duyck)
When the link status changes on the PF we need to notify the VFs. In order to do this we should ping all of the VFs in order to trigger a link status change on them as well. This fixes issues in which the PF would reset, but the VF didn't because the NAK flag was not set in the VF mailbox. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Tested-by: Sibai Li <sibai.li@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-07-16  Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next  (David S. Miller)
Jeff Kirsher says:
====================
This series contains updates to e1000e and ixgbe.
...
Alexander Duyck (5):
  ixgbe: Simplify logic for getting traffic class from user priority
  ixgbe: Cleanup unpacking code for DCB
  ixgbe: Populate the prio_tc_map in ixgbe_setup_tc
  ixgbe: Add function for obtaining FCoE TC based on FCoE user priority
  ixgbe: Merge FCoE set_num and cache_ring calls into RSS/DCB config
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-07-16  ethernet: Use eth_random_addr  (Joe Perches)
Convert the existing uses of random_ether_addr to the new eth_random_addr. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
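The conversion is mechanical; a small sketch (the wrapper is illustrative, eth_random_addr is the helper this series switches to):

    #include <linux/etherdevice.h>

    /* Fill a 6-byte buffer with a random, locally administered unicast MAC. */
    static void fill_random_mac(u8 addr[ETH_ALEN])
    {
            eth_random_addr(addr);          /* was: random_ether_addr(addr); */
    }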