The eIPoIB local Ethernet MAC is currently constructed from the port
GUID. Given a base GUID/MAC value of N, Mellanox seems to populate:
  Node GUID:   N + 0
  Port 1 GUID: N + 1
  Port 2 GUID: N + 2
and
  Port 1 MAC:  N + 0
  Port 2 MAC:  N + 1
This causes a duplicate local MAC address when port 1 is configured as
Infiniband and port 2 as Ethernet, since both will derive their MAC
address as (N + 1).
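To illustrate the collision, here is a minimal standalone sketch of
the arithmetic described above (the base value N is hypothetical):

  #include <stdint.h>
  #include <stdio.h>

  int main ( void ) {
          uint64_t base = 0x0002c9000001ULL; /* hypothetical base value N */

          /* eIPoIB local EMAC as currently derived from the port 1
           * GUID (N + 1)
           */
          uint64_t port1_eipoib_emac = ( base + 1 );

          /* Ethernet MAC assigned by the firmware to port 2 (N + 1) */
          uint64_t port2_eth_mac = ( base + 1 );

          /* With port 1 as Infiniband and port 2 as Ethernet, both
           * ports end up with the same local MAC address.
           */
          printf ( "port 1 eIPoIB EMAC:  %012jx\n",
                   ( uintmax_t ) port1_eipoib_emac );
          printf ( "port 2 Ethernet MAC: %012jx\n",
                   ( uintmax_t ) port2_eth_mac );
          return 0;
  }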
Fix by using the port's Ethernet MAC as the eIPoIB local EMAC. This
is a behavioural change that could potentially break configurations
that rely on the local EMAC value, such as a DHCP server relying on
the chaddr field for DHCP reservations.
Signed-off-by: Christian Iversen <ci@iversenit.dk>
Modified-by: Michael Brown <mcb30@ipxe.org>
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Some older versions of the hardware (and/or firmware) do not report an
event when an Infiniband link reaches the INIT state. The driver
works around this missing event by calling ib_smc_update() on each
event queue poll while the link is in the DOWN state. This results in
a very large number of commands being issued while any open Infiniband
link is in the DOWN state (e.g. unplugged), to the point that the 1ms
delay from waiting for each command to complete will noticeably affect
responsiveness.
Fix by decreasing the command completion polling delay from 1ms to
10us.
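Sketched roughly below (the constant and helper names are illustrative
rather than actual hermon driver symbols), the completion wait keeps
its overall timeout but polls far more frequently:

  #include <ipxe/timer.h>

  /* Hypothetical status check standing in for the driver's real test
   * of whether the command has completed.
   */
  extern int cmd_done ( void );

  #define CMD_TIMEOUT_US ( 2 * 1000 * 1000 ) /* overall timeout (illustrative) */
  #define CMD_POLL_US    10                  /* per-poll delay; was 1000 (1ms) */

  static int wait_for_cmd ( void ) {
          unsigned int elapsed;

          for ( elapsed = 0 ; elapsed < CMD_TIMEOUT_US ;
                elapsed += CMD_POLL_US ) {
                  if ( cmd_done() )
                          return 0;
                  udelay ( CMD_POLL_US ); /* previously mdelay ( 1 ) */
          }
          return -1; /* timed out */
  }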
Signed-off-by: Christian Iversen <ci@iversenit.dk>
Modified-by: Michael Brown <mcb30@ipxe.org>
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Add hermon_dump_eqctx() for dumping the event queue context and
hermon_dump_eqes() for dumping any unconsumed event queue entries.
Originally-implemented-by: Christian Iversen <ci@iversenit.dk>
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Some commands (particularly in relation to device initialization) can
occasionally take longer than 2 seconds, and the Mellanox
documentation recommends a 10 second timeout. Increase the command
completion timeout accordingly.
Signed-off-by: Christian Iversen <ci@iversenit.dk>
Require drivers to report the total number of Infiniband ports. This
is necessary to report the correct number of ports on devices with
dynamic port types.
For example, dual-port Mellanox cards configured for (eth, ib) would
be rejected by the subnet manager, because the Infiniband port would
be reported as "port 2, out of 1".
Signed-off-by: Christian Iversen <ci@iversenit.dk>
Some versions of gcc (observed with gcc 9.3.0 on NixOS Linux) produce
a spurious warning about an out-of-bounds array access for the
isa_extra_probe_addrs[] array.
Work around this compiler bug by redefining the array index as a
signed long, which seems to somehow avoid this spurious warning.
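A minimal sketch of the workaround (the array contents and the probe
call are illustrative, not the actual ISA bus code):

  #include <stdint.h>
  #include <stdio.h>

  /* Illustrative extra probe addresses */
  static const uint16_t isa_extra_probe_addrs[] = { 0x200, 0x240 };

  #define ISA_EXTRA_PROBE_ADDR_COUNT \
          ( sizeof ( isa_extra_probe_addrs ) / \
            sizeof ( isa_extra_probe_addrs[0] ) )

  int main ( void ) {
          /* Using a signed long index (rather than an unsigned int)
           * avoids the spurious -Warray-bounds diagnostic from some
           * gcc versions.
           */
          signed long i;

          for ( i = 0 ; i < ( signed long ) ISA_EXTRA_PROBE_ADDR_COUNT ;
                i++ ) {
                  printf ( "probing I/O address %#x\n",
                           ( unsigned int ) isa_extra_probe_addrs[i] );
          }
          return 0;
  }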
Debugged-by: Manuel Mendez <mmendez534@gmail.com>
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Ensure that any command failure messages are followed up with an error
message indicating what the failed command was attempting to perform.
Signed-off-by: Michael Brown <mcb30@ipxe.org>
The xHCI driver can handle only a single command TRB in progress at
any one time. Immediately fail any attempts to issue concurrent
commands (which should not occur in normal operation).
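A minimal sketch of the guard (the structure and field names are
hypothetical stand-ins, not the actual xHCI driver types):

  /* Hypothetical stand-ins for the driver's device and command TRB types */
  struct xhci_trb;
  struct xhci_device {
          struct xhci_trb *pending; /* command TRB in progress, or NULL */
  };

  static int xhci_command ( struct xhci_device *xhci,
                            struct xhci_trb *trb ) {

          /* Refuse to issue concurrent commands: only one command TRB
           * may be in progress at any one time.
           */
          if ( xhci->pending )
                  return -1; /* e.g. -EBUSY */

          /* Record the pending command and issue it (issue path omitted) */
          xhci->pending = trb;
          return 0;
  }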
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Email addresses at solarflare.com will stop working, so update them.
Remove the email address for Shradha Shah, as she is no longer
involved with this. Update copyright notices for the files touched.
Signed-off-by: Michael Brown <mcb30@ipxe.org>
USB tethering via an iPhone is unreasonably complicated due to the
requirement to perform a pairing operation that involves establishing
a TLS session over a completely unrelated USB function that speaks a
protocol that is almost, but not quite, entirely unlike TCP.
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Include a potential DMA mapping within the definition of an I/O
buffer, and move all I/O buffer DMA mapping functions from dma.h to
iobuf.h. This avoids the need for drivers to maintain a separate list
of DMA mappings for each I/O buffer that they may handle.
Network device drivers typically do not keep track of transmit I/O
buffers, since the network device core already maintains a transmit
queue. Drivers will typically call netdev_tx_complete_next() to
complete a transmission without first obtaining the relevant I/O
buffer pointer (and will rely on the network device core automatically
cancelling any pending transmissions when the device is closed).
To allow this driver design approach to be retained, update the
netdev_tx_complete() family of functions to automatically perform the
DMA unmapping operation if required. For symmetry, also update the
netdev_rx() family of functions to behave the same way.
As a further convenience for drivers, allow the network device core to
automatically perform DMA mapping on the transmit datapath before
calling the driver's transmit() method. This avoids the need to
introduce a mapping error handling code path into the typically
error-free transmit methods.
With these changes, the modifications required to update a typical
network device driver to use the new DMA API are fairly minimal:
- Allocate and free descriptor rings and similar coherent structures
  using dma_alloc()/dma_free() rather than malloc_phys()/free_phys()
- Allocate and free receive buffers using alloc_rx_iob()/free_rx_iob()
  rather than alloc_iob()/free_iob()
- Calculate DMA addresses using dma() or iob_dma() rather than
  virt_to_bus()
- Set a 64-bit DMA mask if needed using dma_set_mask_64bit() and
  thereafter eliminate checks on DMA address ranges
- Either record the DMA device in netdev->dma, or call iob_map_tx() as
  part of the transmit() method
- Ensure that debug messages use virt_to_phys() when displaying
  "hardware" addresses
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Redefine the value stored within a DMA mapping to be the offset
between physical addresses and DMA addresses within the mapped region.
Provide a dma() wrapper function to calculate the DMA address for any
pointer within a mapped region, thereby simplifying the use cases when
a device needs to be given addresses other than the region start
address.
On a platform using the "flat" DMA implementation the DMA offset for
any mapped region is always zero, with the result that dma_map() can
be optimised away completely and dma() reduces to a straightforward
call to virt_to_phys().
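Conceptually (a simplified sketch rather than the literal dma.h
definitions, hence the _sketch suffixes):

  #include <ipxe/io.h>

  /* A DMA mapping need only record the offset between physical and
   * DMA addresses within the mapped region.
   */
  struct dma_mapping_sketch {
          physaddr_t offset;
  };

  /* DMA address for any pointer within the mapped region */
  static inline physaddr_t dma_sketch ( struct dma_mapping_sketch *map,
                                        void *addr ) {
          return ( virt_to_phys ( addr ) + map->offset );
  }

  /* With the "flat" implementation, map->offset is always zero, so
   * this reduces to virt_to_phys ( addr ) and dma_map() itself can be
   * optimised away.
   */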
Signed-off-by: Michael Brown <mcb30@ipxe.org>
For the physical function driver, the transmit queue needs to be
configured to be associated with the relevant physical function
number. This is currently obtained from the bus:dev.fn address of the
underlying PCI device.
In the case of a virtual machine using the physical function via PCI
passthrough, the PCI bus:dev.fn address within the virtual machine is
unrelated to the real physical function number. Such a function will
typically be presented to the virtual machine as a single-function
device. The function number extracted from the PCI bus:dev.fn address
will therefore always be zero.
Fix by reading from the Function Requester ID Information Register,
which always returns the real PCI bus:dev.fn address as used by the
physical host.
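A hedged sketch of the idea (the register offset and field mask are
assumptions, to be taken from the datasheet rather than from here; a
PCI requester ID encodes the function number in its lowest three
bits):

  #include <stdint.h>
  #include <ipxe/io.h>

  #define XL_PF_FUNC_RID          0x0009c000      /* hypothetical offset */
  #define XL_PF_FUNC_RID_FUNC(x)  ( (x) & 0x7 )   /* function number field */

  static unsigned int xl_pf_number ( void *regs ) {
          uint32_t rid;

          /* Read the Function Requester ID Information Register and
           * extract the real function number, rather than trusting
           * the (possibly virtualised) PCI bus:dev.fn address.
           */
          rid = readl ( regs + XL_PF_FUNC_RID );
          return XL_PF_FUNC_RID_FUNC ( rid );
  }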
Signed-off-by: Michael Brown <mcb30@ipxe.org>
The datasheet is fairly incomprehensible in terms of identifying the
appropriate MAC address for use by the physical function driver.
Choose to read the MAC address from PRTPM_SAH and PRTPM_SAL, which at
least matches the MAC address as selected by the Linux i40e driver.
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Physical addresses in debug messages are more meaningful from an
end-user perspective than potentially IOMMU-mapped I/O virtual
addresses, and have the advantage of being calculable without access
to the original DMA mapping entry (e.g. when displaying an address for
a single failed completion within a descriptor ring).
Signed-off-by: Michael Brown <mcb30@ipxe.org>
For a software UNDI, the addresses in PXE_CPB_TRANSMIT.FrameAddr and
PXE_CPB_RECEIVE.BufferAddr are host addresses, not bus addresses.
Remove the spurious (and no-op) use of virt_to_bus() and replace with
a cast via intptr_t.
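A minimal sketch of the transmit side (the structures are hypothetical
stand-ins; FrameAddr is the field named above):

  #include <stdint.h>

  typedef struct { uint64_t FrameAddr; } PXE_CPB_TRANSMIT_SKETCH;
  struct io_buffer_sketch { void *data; };

  static void fill_frame_addr ( PXE_CPB_TRANSMIT_SKETCH *cpb,
                                struct io_buffer_sketch *iobuf ) {
          /* For a software UNDI, FrameAddr is a host address: a plain
           * cast via intptr_t is sufficient, and the previous
           * virt_to_bus() call was a misleading no-op.
           */
          cpb->FrameAddr = ( ( intptr_t ) iobuf->data );
  }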
Signed-off-by: Michael Brown <mcb30@ipxe.org>
The UEFI specification defines PXE_CPB_TRANSMIT.DataLen as excluding
the length of the media header. iPXE currently fills in DataLen as
the whole frame length (including the media header), along with
placing the media header length separately in MediaheaderLen. On some
UNDI implementations (observed using a VMware ESXi 7.0b virtual
machine), this causes transmitted packets to include 14 bytes of
trailing garbage.
Match the behaviour of the EDK2 SnpDxe driver, which fills in DataLen
as the whole frame length (including the media header) and leaves
MediaheaderLen as zero. This behaviour also violates the UEFI
specification, but is likely to work in practice since EDK2 is the
reference implementation.
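A sketch of the resulting fill (the field names are those cited above;
the CPB type itself is a hypothetical stand-in):

  #include <stddef.h>
  #include <stdint.h>
  #include <string.h>

  typedef struct {
          uint64_t FrameAddr;
          uint32_t DataLen;
          uint16_t MediaheaderLen;
  } PXE_CPB_TRANSMIT_SKETCH;

  static void fill_transmit_cpb ( PXE_CPB_TRANSMIT_SKETCH *cpb,
                                  void *frame, size_t frame_len ) {
          memset ( cpb, 0, sizeof ( *cpb ) );
          cpb->FrameAddr = ( ( intptr_t ) frame );

          /* Match EDK2 SnpDxe: DataLen covers the whole frame
           * (including the 14-byte Ethernet header) and
           * MediaheaderLen is left as zero.
           */
          cpb->DataLen = frame_len;
          cpb->MediaheaderLen = 0;
  }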
Signed-off-by: Michael Brown <mcb30@ipxe.org>
The malloc_dma() function allocates memory with specified physical
alignment, and is typically (though not exclusively) used to allocate
memory for DMA.
Rename to malloc_phys() to more closely match the functionality, and
to create name space for functions that specifically allocate and map
DMA-capable buffers.
Signed-off-by: Michael Brown <mcb30@ipxe.org>
The legacy transmit descriptor index is not reset by anything short of
a full device reset. This can cause the legacy transmit ring to stall
after closing and reopening the device, since the hardware and
software indices will be out of sync.
Fix by performing a reset after closing the interface. Do this only
if operating in legacy mode, since in C+ mode the reset is not
required and would undesirably clear additional state (such as the C+
command register itself).
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Commit 87e39a9c9 ("[efi] Split efi_usb_path() out to a separate
function") unintentionally introduced an undefined symbol reference
from efi_path.o to usb_depth(), causing the USB subsystem to become a
dependency of all EFI builds.
Fix by converting usb_depth() to a static inline function.
Reported-by: Pico Mitchell <pico@randomapplications.com>
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Some UEFI BIOSes (observed with at least the Insyde UEFI BIOS on a
Microsoft Surface Go) provide a very broken version of the
UsbMassStorageDxe driver that is incapable of binding to the standard
EFI_USB_IO_PROTOCOL instances and instead relies on an undocumented
proprietary protocol (with GUID c965c76a-d71e-4e66-ab06-c6230d528425)
installed by the platform's custom version of UsbCoreDxe.
The upshot is that USB mass storage devices become inaccessible once
iPXE's native USB host controller drivers are loaded.
One possible workaround is to load a known working version of
UsbMassStorageDxe (e.g. from the EDK2 tree): this driver will
correctly bind to the standard EFI_USB_IO_PROTOCOL instances exposed
by iPXE. This workaround is ugly in practice, since it involves
embedding UsbMassStorageDxe.efi into the iPXE binary and including an
embedded script to perform the required "chain UsbMassStorageDxe.efi".
Provide a native USB mass storage driver for iPXE, allowing USB mass
storage devices to be exposed as iPXE SAN devices.
Signed-off-by: Michael Brown <mcb30@ipxe.org>
For USB mass storage devices, we do not want to submit more bulk IN
packets than are required for the inbound data, since this will waste
memory.
Allow an upper limit to be specified on each refill attempt. The
endpoint will be refilled to the lower of this limit or the limit
specified by usb_refill_init().
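The resulting refill behaviour, sketched with hypothetical field and
function names (the real entry point and endpoint structure live in
the USB core):

  /* Hypothetical endpoint refill state */
  struct usb_endpoint_sketch {
          unsigned int max;  /* limit configured via usb_refill_init() */
          unsigned int fill; /* buffers currently submitted */
  };

  /* Refill to the lower of the per-call limit and the limit
   * configured at usb_refill_init() time.
   */
  static void usb_refill_limit_sketch ( struct usb_endpoint_sketch *ep,
                                        unsigned int max ) {
          if ( max > ep->max )
                  max = ep->max;
          while ( ep->fill < max ) {
                  /* allocate and enqueue one bulk IN buffer (omitted) */
                  ep->fill++;
          }
  }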
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Closing and reopening a USB endpoint will clear any halt status
recorded by the host controller, but may leave the endpoint halted at
the device. This will cause the first packet submitted to the
reopened endpoint to be lost, before the automatic stall recovery
mechanism detects the halt and resets the endpoint.
This is relatively harmless for USB network or HID devices, since the
wire protocols will recover gracefully from dropped packets. Some
protocols (e.g. for USB mass storage devices) assume zero packet loss
and so would be adversely affected.
Fix by allowing any device endpoint halt status to be cleared on a
freshly opened endpoint.
Signed-off-by: Michael Brown <mcb30@ipxe.org>
A zero divisor will currently lead to a 16-bit integer overflow when
calculating the transmit padding, and a potential division by zero if
assertions are enabled.
Avoid these problems by treating a divisor value of zero as equivalent
to a divisor value of one (i.e. no alignment requirements).
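A standalone illustration of the guard (the padding formula is
illustrative; the point is the zero-divisor handling):

  #include <stddef.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Padding needed to round len up to a multiple of the alignment
   * divisor; a divisor of zero is treated as one, i.e. no alignment
   * requirement and no division by zero.
   */
  static size_t tx_pad_len ( size_t len, uint16_t divisor ) {
          if ( ! divisor )
                  divisor = 1;
          return ( ( divisor - ( len % divisor ) ) % divisor );
  }

  int main ( void ) {
          printf ( "pad(60, 4) = %zu\n", tx_pad_len ( 60, 4 ) );
          printf ( "pad(61, 4) = %zu\n", tx_pad_len ( 61, 4 ) );
          printf ( "pad(61, 0) = %zu\n", tx_pad_len ( 61, 0 ) );
          return 0;
  }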
Signed-off-by: Michael Brown <mcb30@ipxe.org>
The current error handling mechanism defers the endpoint reset until
the next use of the endpoint, on the basis that errors are detected
during completions and completion handling should not recursively call
usb_poll().
In the case of usb_control(), we are already at the level that calls
usb_poll() and can therefore safely perform the endpoint reset
immediately. This has no impact on functionality, but does make
debugging traces easier to read since the reset will appear
immediately after the causative error.
Signed-off-by: Michael Brown <mcb30@ipxe.org>
We currently use a heuristic to determine whether or not to request
cable detection in PXE_OPCODE_INITIALIZE, based on the need to work
around a known Emulex driver bug (see commit c0b61ba "[efi] Work
around bugs in Emulex NII driver") and the need to accommodate links
that are legitimately slow to come up (see commit 6324227 "[efi] Skip
cable detection at initialisation where possible").
This heuristic appears to fail with newer Emulex drivers. Attempt to
support all known drivers (past and present) by first attempting
initialisation with cable detection, then falling back to attempting
initialisation without cable detection.
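The fallback logic, roughly (the initialisation helper is a
hypothetical stand-in for the driver's internal function):

  struct nii_nic;

  /* Hypothetical helper: issue PXE_OPCODE_INITIALIZE, optionally
   * requesting cable detection.
   */
  extern int nii_initialise_flags ( struct nii_nic *nii,
                                    int detect_cable );

  static int nii_initialise ( struct nii_nic *nii ) {
          int rc;

          /* First attempt initialisation with cable detection */
          if ( ( rc = nii_initialise_flags ( nii, 1 ) ) == 0 )
                  return 0;

          /* Fall back to attempting initialisation without cable
           * detection, for drivers that fail when it is requested.
           */
          return nii_initialise_flags ( nii, 0 );
  }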
Reported-by: Kwang Woo Lee <kwleeyh@gmail.com>
Tested-by: Kwang Woo Lee <kwleeyh@gmail.com>
Signed-off-by: Michael Brown <mcb30@ipxe.org>