Fix a race condition where we could still be booting up the GPU and/or
initializing the driver while other code paths assume that everything
is already done. Some userspace APIs test g->gr.sw_ready to make sure
that we're ready, but this flag is set in the middle of bootup; other
work happens after gr initialization. Add a new flag that is set only
after bootup is fully complete, at the end of finalize_poweron, and
change the checks in the user API paths to test the new flag only.
These checks are only in the ioctl paths for ctrl, dbg and tsg, and in
the ctrl device's opening path.
The gr.sw_ready flag is left in place to signify only that gr itself
has had its bookkeeping initialized.
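A minimal sketch of the pattern, assuming hypothetical structure and
function names; only g->gr.sw_ready and finalize_poweron come from this
change, everything else is illustrative:

#include <errno.h>
#include <stdbool.h>

/* Hypothetical stand-in for the driver state; not the real nvgpu layout. */
struct gpu {
    struct {
        bool sw_ready;   /* set mid-boot, once gr bookkeeping is done */
    } gr;
    bool sw_ready;       /* new flag: set only when poweron fully completes */
};

static int finalize_poweron_sketch(struct gpu *g)
{
    /* ... gr init runs here and sets g->gr.sw_ready ... */
    /* ... remaining bring-up steps after gr init ... */

    g->sw_ready = true;  /* bootup is now fully complete */
    return 0;
}

/* User API paths (ctrl/dbg/tsg ioctls, ctrl open) now test the new flag. */
static long example_ioctl(struct gpu *g)
{
    if (!g->sw_ready)    /* was: !g->gr.sw_ready, which races with boot */
        return -EAGAIN;  /* illustrative error code */

    /* ... handle the request ... */
    return 0;
}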
Bug 200370011
Change-Id: I2995500e06de46430d9b835de1e9d60b3f01744e
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1640124
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Syncpoints can no longer be disabled/enabled at runtime: the
NVGPU_HAS_SYNCPOINTS flag is set during probe based on the
has_syncpoints value in platform data. Based on this flag, either the
syncpoint or the semaphore pool is initialized.
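A rough sketch of that probe-time decision; the types and init helpers
below are hypothetical placeholders, only has_syncpoints and the
NVGPU_HAS_SYNCPOINTS behaviour are taken from the change itself:

#include <stdbool.h>

/* Hypothetical stand-ins; not the real nvgpu types. */
struct platform_data { bool has_syncpoints; };
struct gpu { bool has_syncpoints; };

static int init_syncpoint_support(struct gpu *g) { (void)g; return 0; } /* placeholder */
static int init_semaphore_pool(struct gpu *g)    { (void)g; return 0; } /* placeholder */

static int probe_sync_backend(struct gpu *g, const struct platform_data *pdata)
{
    /*
     * Decided once at probe from platform data (this is what the
     * NVGPU_HAS_SYNCPOINTS flag now reflects); no runtime toggling.
     */
    g->has_syncpoints = pdata->has_syncpoints;

    return g->has_syncpoints ? init_syncpoint_support(g)
                             : init_semaphore_pool(g);
}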
Bug 2040115
Change-Id: Ib256e1a6ec8b1584799adb6f183fd567aebfaf13
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1640380
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Enable devfreq for gv11b by enabling the "nvhost_podgov" governor in
platform data.
Reuse the scaling functions from gp10b/gk20a.
Remove the emc floor on railgate for power saving, and make the max emc
frequency the floor on rail-ungate for faster GPU boot.
Bug 2039013
Bug 200377508
Change-Id: I65ee7735202e3decbe3451157f7fc1f1f273c3ff
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1639752
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add cleanup to the gk20a_probe() function, since it will often fail due
to probe deferral. These "failures" cause the function to be called
multiple times, potentially allocating many resources over and over
again and leaking the old allocations.
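The general shape of the fix, assuming a typical error-unwind pattern;
the resources and helper names below are illustrative, not the actual
gk20a_probe() contents:

#include <errno.h>

#ifndef EPROBE_DEFER
#define EPROBE_DEFER 517   /* kernel-internal errno, defined here for the sketch */
#endif

/* Illustrative resources and helpers; not the real probe code. */
struct probe_state { void *res_a; void *res_b; };

static int alloc_res_a(struct probe_state *s) { (void)s; return 0; }
static int alloc_res_b(struct probe_state *s) { (void)s; return -EPROBE_DEFER; }
static void free_res_a(struct probe_state *s) { (void)s; }

static int example_probe(struct probe_state *s)
{
    int err;

    err = alloc_res_a(s);
    if (err)
        return err;

    err = alloc_res_b(s);
    if (err)
        goto fail_res_b;   /* deferral: unwind instead of leaking */

    return 0;

fail_res_b:
    free_res_a(s);         /* the next probe attempt starts clean */
    return err;
}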
Bug 200369627
Change-Id: Ic0bba0ae6542485135d9cb7393086e4460cd271d
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1640628
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
On gv11b we can have multiple SMs per TPC. Add sm_per_tpc to the vgpu
constants to properly dimension the virtual SM to TPC/GPC mapping in
the virtualization case.
Use TEGRA_VGPU_CMD_GET_SMS_MAPPING to query current mapping.
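A simplified sketch of the dimensioning issue; the mapping-entry type
and constant names other than sm_per_tpc and
TEGRA_VGPU_CMD_GET_SMS_MAPPING are hypothetical:

#include <stdlib.h>

/* Hypothetical mapping entry; the real layout lives in the vgpu ABI. */
struct sm_mapping { unsigned int gpc_index, tpc_index, sm_index; };

struct vgpu_constants {
    unsigned int gpc_count;
    unsigned int tpc_per_gpc;
    unsigned int sm_per_tpc;   /* new: >1 on gv11b, so it can't be assumed to be 1 */
};

static struct sm_mapping *alloc_sm_mapping(const struct vgpu_constants *c)
{
    /* Previously sized as gpc_count * tpc_per_gpc, i.e. one SM per TPC. */
    size_t count = (size_t)c->gpc_count * c->tpc_per_gpc * c->sm_per_tpc;

    /* The table would then be filled from TEGRA_VGPU_CMD_GET_SMS_MAPPING. */
    return calloc(count, sizeof(struct sm_mapping));
}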
Bug 2039676
Change-Id: I817be18f9a28cfb9bd8af207d7d6341a2ec3994b
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1631203
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
During bringup, and before nvlink is up, GV100 on the DDPX platform
operates with a very, very slow sysmem link. In order to get the sysmem
test to pass it is necessary to significantly increase most timeouts by
an order of magnitude.
Bug 2040544
Change-Id: I26858afde4ae80c70f86b47cfff674b6b00b5bf8
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1627417
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
With a recent rework we moved the gk20a_get() call to
nvgpu_ioctl_tsg_open(), but the corresponding gk20a_put() call remained
in gk20a_tsg_release(). So if a TSG is opened and released from within
the kernel with the gk20a_tsg_open()/gk20a_tsg_release() APIs, we
mistakenly drop an extra refcount through gk20a_put().
Fix this by moving the gk20a_put() call to nvgpu_ioctl_tsg_release(),
which balances the gk20a_get() call in nvgpu_ioctl_tsg_open().
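In outline, a sketch of the balance with simplified stand-in types and
signatures (the real nvgpu functions differ); the "_sketch" suffix
marks everything added for illustration:

/* Simplified stand-ins; the real nvgpu signatures differ. */
struct gk20a { int refcount; };

static struct gk20a *gk20a_get_sketch(struct gk20a *g) { g->refcount++; return g; }
static void gk20a_put_sketch(struct gk20a *g)          { g->refcount--; }

/* ioctl layer: take the reference on open ... */
static int nvgpu_ioctl_tsg_open_sketch(struct gk20a *g)
{
    gk20a_get_sketch(g);
    /* ... gk20a_tsg_open() and friends run here ... */
    return 0;
}

/* ... and drop it on release, so gk20a_tsg_release() itself no longer
 * calls gk20a_put() and the purely in-kernel open/release path stays
 * balanced. */
static void nvgpu_ioctl_tsg_release_sketch(struct gk20a *g)
{
    /* ... gk20a_tsg_release() and friends run here ... */
    gk20a_put_sketch(g);
}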
Bug 200374011
Change-Id: Id0cec0426e6231309dc530ab5c934dacaba9f8da
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1630969
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In gk20a_cde_load(), if TSG allocation fails we bail out of the
function without setting an error code, so the caller assumes the CDE
load was successful.
Fix this by setting an explicit error code if TSG allocation fails.
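The shape of the fix, as a sketch; the variable names, placeholder
helper and the exact error value are illustrative, not the real
gk20a_cde_load() code:

#include <errno.h>
#include <stddef.h>

struct tsg { int unused; };                                 /* placeholder */
static struct tsg *tsg_open_sketch(void) { return NULL; }   /* placeholder */

static int cde_load_sketch(void)
{
    int err = 0;
    struct tsg *tsg;

    tsg = tsg_open_sketch();
    if (!tsg) {
        /* Before the fix, err stayed 0 here, so the caller believed
         * the CDE load had succeeded. -ENOMEM is illustrative. */
        err = -ENOMEM;
        goto done;
    }

    /* ... rest of the load ... */

done:
    return err;
}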
Bug 200374011
Change-Id: I6e7bcb325fb0062605fa2f696da4abdeb34e241a
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1627117
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In gk20a_cde_remove_ctx(), we unbind the channel from the TSG and close
the channel, but we do not drop the TSG refcount, leaking the TSG
reference.
After allocating sufficient contexts, we see TSG creation fail as below:
nvgpu: 17000000.gp10b: gk20a_cde_load:1286 [ERR] cde: could not create TSG
Fix this by explicitly dropping the TSG refcount.
Also, do not explicitly unbind the channel from the TSG;
gk20a_channel_close() will internally unbind the channel from the TSG.
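A sketch of the teardown order described above; the types and helpers
are simplified stand-ins for the real nvgpu calls:

#include <stddef.h>

/* Simplified stand-ins for the teardown described above. */
struct tsg     { int refcount; };
struct channel { struct tsg *tsg; };

static void tsg_ref_put(struct tsg *tsg) { tsg->refcount--; }

static void channel_close_sketch(struct channel *ch)
{
    /* gk20a_channel_close() already unbinds the channel from its TSG,
     * so the caller no longer unbinds explicitly. */
    ch->tsg = NULL;
}

static void cde_remove_ctx_sketch(struct channel *ch, struct tsg *tsg)
{
    channel_close_sketch(ch);   /* implicit unbind happens in here */
    tsg_ref_put(tsg);           /* new: drop the reference so TSGs aren't leaked */
}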
Bug 200374011
Change-Id: If6d75b20d5e03d710c0597d7a320d1157206a2a5
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1627116
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Remove nvgpu internal flag indicating that TSGs are required. We now
require TSGs always. This also fixes a regression where CE channels
were back to using bare channels on gp106.
Bug 1842197
Change-Id: Id359e5a455fb324278636bb8994b583936490ffd
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1628481
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This CL handles:
- erroneous use of the boot_0 function pointer in
  __nvgpu_check_gpu_state before it is assigned
- proper handling of the error returned from gk20a_readl in
  gk20a_mc_boot_0
- the recursion caused by mc.boot_0() calling nvgpu_readl, with
  nvgpu_readl in turn calling mc.boot_0 in case of read failure
With these fixes, the crash is no longer seen when the mc_boot_0 read
returns 0 in gk20a_mc_boot_0.
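A simplified model of the two guards, under the assumption that the fix
reads the register directly in boot_0 and NULL-checks the pointer
before use; names, types and the register offset are illustrative:

#include <stdbool.h>
#include <stdint.h>

/*
 * Model of the problem: the checking read helper fell back to
 * ops.mc.boot_0() on read failure, and boot_0 itself used that helper,
 * so a persistent failure recursed; boot_0 could also be called while
 * still NULL.
 */
struct gpu {
    struct {
        uint32_t (*boot_0)(struct gpu *g);  /* may still be NULL early in boot */
    } ops_mc;
};

static uint32_t raw_readl(struct gpu *g, uint32_t reg) { (void)g; (void)reg; return 0; }

static uint32_t mc_boot_0_sketch(struct gpu *g)
{
    /* Read directly; do not bounce back through the checking wrapper. */
    return raw_readl(g, 0x0);
}

static bool check_gpu_state_sketch(struct gpu *g)
{
    /* Guard the function pointer instead of calling it unconditionally. */
    if (!g->ops_mc.boot_0)
        return false;

    return g->ops_mc.boot_0(g) != 0;   /* a 0 read is treated as a bad state */
}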
Bug 2010966
Change-Id: Ia087811c67d88948b7fc5fff35e0fabc6ea91989
Signed-off-by: Supriya <ssharatkumar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1616274
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
All channels should be wrapped in TSGs so that bare channel support
can be dropped. Bind all CE channels to TSGs.
Bug 1842197
Change-Id: Ia55748d5b53750d860f7764b532ef9eeb6f214b8
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1616693
nvgpu_get_nvhost_dev will not return an error if the host1x field in
the gv11b device tree is not present; it will just set has_syncpoints
in the gk20a struct to false. The syncpt_unit_interface* functions
should be called only if g->has_syncpoints is set to true.
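A small sketch of the guard; only g->has_syncpoints and the
syncpt_unit_interface naming come from the message, the rest is
illustrative:

#include <stdbool.h>

struct gpu { bool has_syncpoints; };                              /* stand-in */

static int syncpt_unit_interface_init_sketch(struct gpu *g) { (void)g; return 0; }

static int poweron_step_sketch(struct gpu *g)
{
    /*
     * nvgpu_get_nvhost_dev() only clears has_syncpoints when host1x is
     * absent; it does not return an error. So the syncpoint unit setup
     * must be gated here rather than relying on an earlier failure.
     */
    if (g->has_syncpoints)
        return syncpt_unit_interface_init_sketch(g);

    return 0;
}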
Change-Id: Id1eb94aba4cff1942ad519f528ebdb8291963971
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1615973
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The GR IDLE timeout is defined as a Kconfig option. Instead of that,
introduce a new header file, defaults.h, which encapsulates any generic
defaults we use in nvgpu, and move the definition there.
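Roughly, the new header might look like the sketch below; the macro
name and the 3000 ms value are placeholders, not necessarily what is
carried over from Kconfig:

/* defaults.h (sketch) - generic nvgpu-wide defaults */
#ifndef NVGPU_DEFAULTS_H
#define NVGPU_DEFAULTS_H

/*
 * Previously a Kconfig option; now a plain compile-time default.
 * The name and the 3000 ms value are illustrative.
 */
#define NVGPU_DEFAULT_GR_IDLE_TIMEOUT 3000

#endif /* NVGPU_DEFAULTS_H */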
Change-Id: I78ff1d2790d7ee3dff6df42bbd11cf683a85bf79
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1612650
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Define __nvgpu_mem_create_from_phys only on systems with nvhost
enabled. The calling code is also built only when nvhost is enabled.
phys_to_page() also exists only on arm64, so using it on non-arm64
platforms causes a build failure.
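The build-guard pattern being applied, in outline; the Kconfig symbol
name is an assumption and the body is elided:

/*
 * Sketch of the guard; CONFIG_TEGRA_GK20A_NVHOST is assumed as the
 * nvhost symbol name.
 */
#if defined(CONFIG_TEGRA_GK20A_NVHOST)
/*
 * Defined only when nvhost support is built in, alongside its only
 * caller, so non-nvhost and non-arm64 builds never reference
 * phys_to_page().
 */
static int mem_create_from_phys_sketch(void)
{
    /* ... phys_to_page() based construction would live here ... */
    return 0;
}
#endif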
Change-Id: Iee023b55bba863d46079796e1c49c19456c1d229
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1607581
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>