Mirror of git://nv-tegra.nvidia.com/linux-nv-oot.git
(synced 2025-12-22 09:11:26 +03:00)

commit d01b6a3a2b748fd01b25d6295a23d49dc669f1be
Existing implementation uses a single page_pool for all Rx DMA channels. As all IRQs are on CPU 0 by default, this does not cause any issue. Routing an IRQ to some other CPU causes a race condition for page allocation from the page_pool, which leads to random memory corruption.

[  158.416637] Call trace:
[  158.416644]  arm_lpae_map_pages+0xb4/0x1e0
[  158.416649]  arm_smmu_map_pages+0x8c/0x160
[  158.416661]  __iommu_map+0xf8/0x2b0
[  158.416677]  iommu_map_atomic+0x58/0xb0
[  158.416683]  __iommu_dma_map+0xac/0x150
[  158.416687]  iommu_dma_map_page+0xf4/0x220
[  158.416690]  dma_map_page_attrs+0x1e8/0x260
[  158.416727]  page_pool_dma_map+0x48/0xd0
[  158.416750]  __page_pool_alloc_pages_slow+0xc4/0x390
[  158.416757]  page_pool_alloc_pages+0x64/0x90
[  158.416762]  ether_padctrl_mii_rx_pins+0x164/0x9d0 [nvethernet]
[  158.416807]  ether_padctrl_mii_rx_pins+0x478/0x9d0 [nvethernet]
[  158.416822]  osi_process_rx_completions+0x284/0x4d0 [nvethernet]
[  158.416832]  0xffffa26218b8f71c
[  158.416855]  __napi_poll+0x48/0x230
[  158.416871]  net_rx_action+0xf4/0x290
[  158.416875]  __do_softirq+0x130/0x3e8
[  158.416889]  __irq_exit_rcu+0xe8/0x110

This change creates a page_pool per Rx DMA channel. This ensures there is no race condition for page alloc/dealloc from each DMA NAPI context.

Bug 4541158

Change-Id: I12668ee7d824fd30d54a874bbbdf190d02943478
Signed-off-by: Aniruddha Paul <anpaul@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nv-oot/+/3093534
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Tested-by: Jon Hunter <jonathanh@nvidia.com>
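
To illustrate the per-channel approach, here is a minimal sketch built on the upstream kernel page_pool API. The driver-side names (struct rx_chan, rx_chan_pool_create, rx_chan_refill_one, ring_size) are hypothetical placeholders, not the actual nvethernet code; only the page_pool calls themselves are real kernel API.

    #include <linux/device.h>
    #include <linux/dma-mapping.h>
    #include <linux/err.h>
    #include <linux/numa.h>
    #include <net/page_pool/helpers.h>  /* <net/page_pool.h> on kernels before v6.6 */

    /* Hypothetical per-channel state; not the real nvethernet struct. */
    struct rx_chan {
            struct page_pool *pool;     /* dedicated pool, one per Rx DMA channel */
            /* ... descriptor ring, NAPI context, etc. ... */
    };

    /* Create a dedicated page_pool for one Rx DMA channel. */
    static int rx_chan_pool_create(struct device *dev, struct rx_chan *chan,
                                   unsigned int ring_size)
    {
            struct page_pool_params pp = {
                    .flags     = PP_FLAG_DMA_MAP,   /* pool maps pages for DMA */
                    .order     = 0,                 /* single pages */
                    .pool_size = ring_size,
                    .nid       = NUMA_NO_NODE,
                    .dev       = dev,
                    .dma_dir   = DMA_FROM_DEVICE,
            };

            chan->pool = page_pool_create(&pp);
            if (IS_ERR(chan->pool))
                    return PTR_ERR(chan->pool);
            return 0;
    }

    /*
     * Called only from this channel's own NAPI poll loop: with one pool
     * per channel, no two CPUs ever allocate from the same pool
     * concurrently, which is the race the commit fixes.
     */
    static struct page *rx_chan_refill_one(struct rx_chan *chan)
    {
            return page_pool_dev_alloc_pages(chan->pool);
    }

On teardown each channel would release its pool with page_pool_destroy(), and pages freed from the same NAPI context can be recycled back into the pool with page_pool_put_full_page().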