Move the Linux dependencies and CONFIG_DEBUG_FS code for the gp106
clk debugfs out of the common driver and into Linux-specific code.
There is no functional change in the functions moved from
gp106/clk_gp106.c.
Use nvgpu_os_linux_ops to add the gp106-specific clk debugfs ops.
The Linux-specific part of the nvgpu driver uses this op to
initialize the gp106 clk debugfs.
As gv100 also uses the gp106 clk debugfs ops, set up the os ops for
gv100 as well.
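
A minimal sketch of the intended hookup is below. The field and
function names are illustrative and may not match the exact nvgpu
symbols:

/* Illustrative only: an os-ops hook the Linux code calls if the
 * chip (gp106, gv100) registered a clk debugfs initializer. */
struct nvgpu_os_linux_ops {
	struct {
		void (*init_debugfs)(struct gk20a *g);
	} clk;
};

static void nvgpu_clk_init_os_debugfs(struct gk20a *g)
{
	struct nvgpu_os_linux *l = nvgpu_os_linux_from_gk20a(g);

	if (l->ops.clk.init_debugfs != NULL)
		l->ops.clk.init_debugfs(g);
}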
JIRA NVGPU-603
Change-Id: Ib55ef051b13366e5907e1d05376bb18bf42c8653
Signed-off-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1797904
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
g->clk_arb is currently initialized as part of gk20a_finalize_poweron().
Any subsequent call to gk20a_finalize_poweron() reinitializes the
clk_arb, leading to memory leaks. Resolve this by protecting the
g->clk_arb initialization with a mutex, clk_arb_enable_lock, in struct
gk20a, and skipping the initialization if g->clk_arb is not NULL.
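
A minimal sketch of the guarded initialization, assuming the new
clk_arb_enable_lock mutex and the existing nvgpu mutex wrappers and
nvgpu_clk_arb_init_arbiter() constructor:

static int nvgpu_init_clk_arbiter(struct gk20a *g)
{
	int err = 0;

	nvgpu_mutex_acquire(&g->clk_arb_enable_lock);

	/* A repeated gk20a_finalize_poweron() must not rebuild an
	 * arbiter that already exists, or the old one leaks. */
	if (g->clk_arb == NULL)
		err = nvgpu_clk_arb_init_arbiter(g);

	nvgpu_mutex_release(&g->clk_arb_enable_lock);
	return err;
}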
Bug 2061372
Change-Id: I59158e0a5e4c827fdbd6d9ea2d04c78d0986347a
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1811650
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently only cde uses nvgpu_os_linux_ops to set up Linux-specific
ops. Move each GPU's nvgpu_os_linux_ops to a common file so that it
can be reused for the other os ops of that GPU.
JIRA NVGPU-603
Change-Id: Icf1ff275d3832229137f730fe8183b8015e82673
Signed-off-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1797902
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The has_physical_mode flag does little other than force the PTE/PDE
and inst block addresses to be physical instead of potentially
IOMMU-mapped.
There is a reason to do this on Volta (primarily that nvlink is not
IOMMU'able), but the flag is too general: it was enabled on all
native platforms. The problem is that some page tables (the Maxwell
small page directories) can be larger than 4KB, which means the
allocation used for them may be discontiguous. Discontiguous page
directories are obviously incorrect.
This patch deletes the has_physical_mode flag and replaces the
places where it was checked with a check for nvlink being enabled.
Since we _do_ want to program physical PDEs and PTEs for NVLINK
devices (regardless of IOMMU status, they always access memory by
physical address), we need a check of the NVLINK state.
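
The resulting address selection looks roughly like the sketch below
(the helper name is hypothetical; nvgpu_is_enabled(),
NVGPU_SUPPORT_NVLINK and the nvgpu_mem address getters are assumed
to match the existing nvgpu APIs):

static u64 nvgpu_pd_addr(struct gk20a *g, struct nvgpu_mem *pd_mem)
{
	/* NVLINK devices always access memory by physical address,
	 * so bypass any IOMMU mapping for PDEs/PTEs and inst blocks. */
	if (nvgpu_is_enabled(g, NVGPU_SUPPORT_NVLINK))
		return nvgpu_mem_get_phys_addr(g, pd_mem);

	/* Otherwise use the (possibly IOMMU-mapped) DMA address. */
	return nvgpu_mem_get_addr(g, pd_mem);
}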
Bug 200414723
Change-Id: I09ad86b12d8aabcf9648a22503f4747fd63514dd
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1792163
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
clk_arb.h and gk20a.h have a circular dependency on each other. Break
it by forward declaring struct gk20a in clk_arb.h and dropping the
gk20a.h include from clk_arb.h, and similarly forward declaring struct
nvgpu_clk_arb in gk20a.h and dropping the clk_arb.h include from
gk20a.h, along with including clk_arb.h in every compilation unit
that calls clk_arb related methods.
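
In outline, the two headers end up shaped like this (simplified
sketch, not the full declarations):

/* clk_arb.h: no gk20a.h include, only a forward declaration. */
struct gk20a;

int nvgpu_clk_arb_init_arbiter(struct gk20a *g);
void nvgpu_clk_arb_cleanup_arbiter(struct gk20a *g);

/* gk20a.h: no clk_arb.h include, only a forward declaration. */
struct nvgpu_clk_arb;

struct gk20a {
	/* ... */
	struct nvgpu_clk_arb *clk_arb;
	/* ... */
};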
JIRA NVGPU-597
Change-Id: I7cedca17206c148b21d93e5d7f0d88c2f98b979a
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1790915
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Runtime PM is enabled only for the iGPU, not for the dGPU. For the
dGPU, the driver's .probe() calls pm_runtime_disable() if rail-gating
is not enabled. With nvgpu kernel module load/unload, .probe() is
called multiple times for the same struct device *. This overflows
disable_depth (a 3-bit refcount), so runtime PM gets re-enabled on
the 8th iteration and the RTPM routines are called even though
runtime PM is supposed to be disabled.
To manage pm_runtime_disable() effectively, move it from the common
nvgpu_remove() to the iGPU/dGPU specific routines.
Also, restore the pm_runtime state of the device on driver .remove().
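
A sketch of the balanced pattern for the dGPU (PCI) path; the probe
and remove function bodies here are placeholders, not the actual
nvgpu code:

#include <linux/pci.h>
#include <linux/pm_runtime.h>

static int nvgpu_pci_probe(struct pci_dev *pdev,
			   const struct pci_device_id *id)
{
	/* dGPU without rail-gating: keep runtime PM off. */
	pm_runtime_disable(&pdev->dev);
	/* ... rest of probe ... */
	return 0;
}

static void nvgpu_pci_remove(struct pci_dev *pdev)
{
	/* ... tear down the GPU ... */

	/* Balance the probe-time disable so disable_depth does not
	 * keep growing across module load/unload cycles. */
	pm_runtime_enable(&pdev->dev);
}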
Bug 1987855
Change-Id: I781278da546ef9c9ef7d7da7dbea0757df32716f
Signed-off-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1770804
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
On nvgpu module unload, platform_driver_unregister() detaches the
driver from the device (driver_detach()). As part of this,
__device_release_driver() results in a race between the driver's
.runtime_resume(), .remove() and .runtime_suspend().
Since nvgpu's .remove() handles all the steps of cleaning up the
driver state and shutting down the GPU, .runtime_suspend() has no
work left to do. So skip .runtime_suspend() if the gk20a *g has
already been processed.
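
The guard amounts to an early return in the runtime-suspend callback
(sketch; get_gk20a() is assumed to be the existing drvdata accessor
in the Linux platform code):

static int gk20a_pm_runtime_suspend(struct device *dev)
{
	struct gk20a *g = get_gk20a(dev);

	/* .remove() has already cleaned up the driver state and shut
	 * the GPU down; there is nothing left to suspend. */
	if (g == NULL)
		return 0;

	/* ... normal railgate/suspend sequence ... */
	return 0;
}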
Bug 1987855
Change-Id: I024ac63d321689ea04c64b1ffc125da943d482f9
Signed-off-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1770803
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
While removing the nvgpu driver, devm-mapped reg mappings are
released automatically on driver unregister. For the iGPU, these
regs are additionally unmapped explicitly with iounmap(), which
results in "Trying to vfree() nonexistent vm area" warnings on
driver removal.
Address this by using the devm* variants to map all IO regions of
both the iGPU and the dGPU, and let the driver unregister release
these mappings.
Also, lock out GPU regs in the driver removal path.
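
For example, a register aperture can be mapped with the managed
helper instead of a bare ioremap()/iounmap() pair (sketch, not the
exact nvgpu code):

static int nvgpu_map_bar(struct device *dev, struct resource *res,
			 void __iomem **out)
{
	/* devm_ioremap() ties the mapping's lifetime to the device,
	 * so no explicit iounmap() is needed on driver removal. */
	void __iomem *regs = devm_ioremap(dev, res->start,
					  resource_size(res));

	if (regs == NULL)
		return -ENOMEM;

	*out = regs;
	return 0;
}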
Bug 1987855
Change-Id: I0388daf90bea3eaf8752255059cfd3ceabf66e7d
Signed-off-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1730539
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>