- Add an update_patch_count parameter to the ctx_patch_write_begin/end
functions (see the sketch after this list). If true, the
main_image_patch_count register is updated. Previously, the patch count
was updated whenever the cpu_va for the graphics context was non-NULL,
but that only works for sysmem (cpu_va is always 0 for vidmem).
- Remove the unused patch parameter from the commit_global_timeslice functions
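For illustration only, the updated call pattern might look like this
(function and argument names are assumed from the description above, not
copied from the patch):
gr_gk20a_ctx_patch_write_begin(g, ch_ctx, true /* update_patch_count */);
/* ... write patch entries ... */
gr_gk20a_ctx_patch_write_end(g, ch_ctx, true /* update_patch_count */);
Passing false would skip the main_image_patch_count update regardless of
whether cpu_va is set.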
JIRA ESRM-74
Bug 2012077
Change-Id: I35d0a9eb48669a227833bba1d2e63e9fe8fd8aa9
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1594790
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GPU class IDs were moved to the get_litter_value API, but vgpu was not
updated to remove their assignment in HAL initialization. Remove the
duplicate assignments.
JIRA NVGPU-388
Change-Id: I65cf8f9cfcfc372c1c3b0d9239e55f19c9a02f46
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1596247
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a hard-coded #define for map_buffer_batch_limit and use that
instead of querying it from the GPU characteristics. Also add an
nvgpu_is_enabled() flag for disabling batch mapping, and set
map_buffer_batch_limit to zero if batch mapping is disabled.
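A minimal sketch of the intent; the define name, flag name and value below
are assumptions, not taken from the patch:
#define NVGPU_IOCTL_AS_MAP_BUFFER_BATCH_LIMIT 256 /* assumed value */
/* Linux characteristics query: report 0 when batch mapping is disabled. */
gpu->map_buffer_batch_limit =
	nvgpu_is_enabled(g, NVGPU_SUPPORT_MAP_BUFFER_BATCH) ?
	NVGPU_IOCTL_AS_MAP_BUFFER_BATCH_LIMIT : 0;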
JIRA NVGPU-388
Change-Id: Ic91feea638d0f47c5c22321886cfc75e97259dc3
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1593690
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
max_css_buffer_size was accessed directly from the GPU characteristics,
which added a dependency on Linux. Move the field to gr_gk20a and
copy it to the GPU characteristics at query time.
JIRA NVGPU-259
Change-Id: Ied19e33bf1a79a9ce45e33df57fe5bbe3a3c4f9d
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1593689
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Peter Daifuku <pdaifuku@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Final VM mapping refactoring. Move most of the logic in the VM
map path to the common/mm/vm.c code and use the previously implemented
generic APIs to deal with comptags and map caching.
This also updates the mapped_buffer struct to finally be free
of the Linux dma_buf and scatter-gather table pointers; these
are replaced with the nvgpu_os_buffer struct.
JIRA NVGPU-30
JIRA NVGPU-71
JIRA NVGPU-224
Change-Id: If5b32886221c3e5af2f3d7ddd4fa51dd487bb981
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1583987
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a generic nvgpu_os_buffer type, defined by each OS, to abstract
a "user" buffer. This allows the comptag interface to be used in the
core code.
The end goal of this patch is to allow the OS-specific mapping code
to call a generic mapping function that handles most of the mapping
logic. The problem is that a lot of the logic involves comptags, which
are highly dependent on the operating system's buffer management scheme.
With this, each OS can implement the buffer comptag mechanics
however it wishes, without the core MM code caring.
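For illustration, the Linux-side definition might look roughly like this
(fields assumed, not taken from the patch):
/* Per-OS "user" buffer handle; on Linux it wraps a dma_buf. */
struct nvgpu_os_buffer {
	struct dma_buf *dmabuf;
	struct device *dev;
};
Other operating systems would define their own struct nvgpu_os_buffer while
the common comptag code only passes the handle through.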
JIRA NVGPU-30
JIRA NVGPU-223
Change-Id: Iaf64bc52e01ef3f262b4f8f9173a84384db7dc3e
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1583986
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Drastically simplify the alignment computation for buffers being
mapped, and move it into the SGT code. An SGT is all that is needed for
computing the alignment.
However, this did require adding a new SGT op:
nvgpu_sgt_iommuable()
This function returns true if the passed SGT is IOMMU'able and must
be implemented by any SGT implementation that has IOMMU'able buffers.
If this function is left as NULL, the buffer is assumed not to be
IOMMU'able.
Also clean up the parameter ordering convention among all nvgpu_sgt
functions. Previously there was a mishmash of different parameter
orderings; this patch standardizes on the gk20a-first approach
seen everywhere else in the driver.
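A hedged sketch of the wrapper, showing both the NULL-op default and the
gk20a-first parameter ordering (exact names and types assumed):
bool nvgpu_sgt_iommuable(struct gk20a *g, struct nvgpu_sgt *sgt)
{
	/* A missing op means the buffer is not IOMMU'able. */
	if (sgt->ops->sgt_iommuable == NULL)
		return false;
	return sgt->ops->sgt_iommuable(g, sgt);
}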
JIRA NVGPU-30
JIRA NVGPU-246
JIRA NVGPU-71
Change-Id: Ic4ab7b752847cf795c7cfafed5a07818217bba86
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1583985
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The GPU hardware block needs the Tegra fuse clock to mirror
GPU fuses from the Tegra fuse block into the GPU domain.
The Tegra fuse driver provides the following APIs to
enable/disable the Tegra fuse clock:
int tegra_fuse_clock_enable(void);
int tegra_fuse_clock_disable(void);
To ensure that the fuse clock is disabled by the nvgpu
driver when the GPU hardware block is not in use, call
tegra_fuse_clock_enable() in gk20a_pm_unrailgate() and
tegra_fuse_clock_disable() in gk20a_pm_railgate().
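Sketch of the call sites described above (surrounding code elided and
assumed):
static int gk20a_pm_unrailgate(struct device *dev)
{
	tegra_fuse_clock_enable();	/* fuse clock needed while unrailgated */
	/* ... existing unrailgate sequence ... */
	return 0;
}

static int gk20a_pm_railgate(struct device *dev)
{
	/* ... existing railgate sequence ... */
	tegra_fuse_clock_disable();	/* GPU block no longer in use */
	return 0;
}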
Bug 2019897
Change-Id: I61688829fd9a8b0c1ffa9d34db6393550f333866
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1595297
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In gr_gp10b_alloc_gr_ctx(), we use the Linux-specific flags NVGPU_ALLOC_OBJ_FLAGS_*.
Since common code should be independent of Linux-specific code, define new flags
NVGPU_OBJ_CTX_FLAGS_SUPPORT_* in common code and use them wherever needed.
The Linux code will parse the user flags and send the appropriate flags to
g->ops.gr.alloc_obj_ctx(), as shown in the sketch below.
Also remove the use of NVGPU_ALLOC_OBJ_FLAGS_LOCKBOOST_ZERO, since it appears
to be dead code anyway.
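Illustrative translation in the Linux IOCTL path; the flag and HAL argument
names beyond those quoted above are assumptions:
u32 flags = 0;

if (args->flags & NVGPU_ALLOC_OBJ_FLAGS_GFXP)
	flags |= NVGPU_OBJ_CTX_FLAGS_SUPPORT_GFXP;
if (args->flags & NVGPU_ALLOC_OBJ_FLAGS_CILP)
	flags |= NVGPU_OBJ_CTX_FLAGS_SUPPORT_CILP;

err = g->ops.gr.alloc_obj_ctx(ch, args->class_num, flags);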
Jira NVGPU-382
Change-Id: Id82efe0d46ddc3e2c063610025ea57f283bc3510
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1594452
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In gk20a_channel_alloc_gpfifo(), we use the Linux-specific flags
NVGPU_ALLOC_GPFIFO_EX_FLAGS_*.
Since common code should be independent of Linux-specific code, define new flags
NVGPU_GPFIFO_FLAGS_SUPPORT_* in common code and use them in
gk20a_channel_alloc_gpfifo().
The Linux code will parse the user flags and send the appropriate flags to
gk20a_channel_alloc_gpfifo(), for example as sketched below.
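The translation might look roughly like this; the specific flag names and the
gpfifo argument list are assumptions based on the naming pattern above:
u32 gpfifo_flags = 0;

if (args->flags & NVGPU_ALLOC_GPFIFO_EX_FLAGS_VPR_ENABLED)
	gpfifo_flags |= NVGPU_GPFIFO_FLAGS_SUPPORT_VPR;
if (args->flags & NVGPU_ALLOC_GPFIFO_EX_FLAGS_DETERMINISTIC)
	gpfifo_flags |= NVGPU_GPFIFO_FLAGS_SUPPORT_DETERMINISTIC;

err = gk20a_channel_alloc_gpfifo(ch, args->num_entries,
				 args->num_inflight_jobs, gpfifo_flags);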
Jira NVGPU-381
Change-Id: Ibec51903b3407175fbba727208483b0dc36a5772
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1594422
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Since NVGPU_AS_MAP_BUFFER_FLAGS_DIRECT_KIND_CTRL was made mandatory,
the kernel no longer needs to know the details of the PTE kinds.
Thus, we can remove the kind_gk20a.h header and the code
related to kind table setup, as well as simplify the buffer mapping code
a bit.
Bug 1902982
Change-Id: Iaf798023c219a64fb0a84da09431c5ce4bc046eb
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1560933
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Remove gv11b_init_uncompressed_kind_map(), gv11b_init_kind_attr(), and
the related kind setup code. They are not needed anymore.
While we're doing these changes, remove a redundant assignment of
g->bootstrap_owner in hal_gv100.c.
Bug 1902982
Change-Id: Ib40d8f55cfbfa34143a3765c2b4913926ca021fd
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1560931
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
After unbinding a channel, the following fields in the
channel status need to be cleared manually:
ccsr_channel_enable_clr_true
ccsr_channel_pbdma_faulted_reset
ccsr_channel_eng_faulted_reset
Unbinding the channel is expected to clear all other
channel status fields.
Bug 1972365
Change-Id: Ibfd84df2f41adc2eb437a026acde3f3d618d7758
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1594671
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move fuse override DT handling to Linux code. All the chip specific
fuse override functions did the same thing, so delete the HAL and
call the same function to read the DT overrides on all chips.
Also remove the fuse override functionality from dGPU. There are no
DT entries for PCIe devices, so it would've failed anyway.
JIRA NVGPU-259
Change-Id: Ic672e25090cdfc207d9771ab61b6cf53185113a4
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1593693
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move fuse override DT handling to Linux code. All the chip specific
fuse override functions did the same thing, so delete the HAL and
call the same function to read the DT overrides on all chips.
Also remove the fuse override functionality from dGPU. There are no
DT entries for PCIe devices, so it would've failed anyway.
JIRA NVGPU-259
Change-Id: Iba64a5d53bf4eb94198c0408a462620efc2ddde4
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1593687
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
A termination WPR header is inserted in the non-WPR
blob so that the HS ucode knows when to stop processing WPR
headers.
nvgpu_mem_wr32 was copying the terminating WPR header
at the wrong offset in the non-WPR blob.
This caused the LS signatures present in the non-WPR region
to be overwritten, leading to LS authentication failure
for the GPCCS falcon.
The fix is added for t210/t186 as well.
Bug 200362639
Change-Id: I60088b2dd2304fb5de0402b28822b305b34394c2
Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1594862
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Fix characteristics for cyclestats:
- SUPPORT_TSG and SUPPORT_CYCLE_STATS_SNAPSHOT were assigned the same value
- For vgpu, SUPPORT_CYCLE_STATS was set redundantly (but differently)
- For vgpu, if the css buffer size is 0, set the support flag to false
JIRA ESRM-88
Bug 200296210
Change-Id: Iaf98dafec55f171b5968c2a8248290284bf30922
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1593939
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Modify the LTC code to only use a contiguous CompBit Cache (CBC). The
original code had two allocation schemes, "physical" and "virtual",
which really meant virtually contiguous or physically contiguous. The
CBC must appear contiguous to the GPU, whether via the IOMMU or via
physically contiguous pages.
This change makes the CBC get allocated with the FORCE_CONTIGUOUS flag
if the GPU is not IOMMU'able. If we can get contiguous memory through the
IOMMU, there is no need to force the underlying pages to be contiguous.
However, not all GPUs may be IOMMU'able, so we do need to handle that
case.
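Rough sketch of the allocation decision (helper and field names assumed):
unsigned long flags = 0;

/* Without an IOMMU the CBC backing pages themselves must be contiguous;
 * with an IOMMU the GPU already sees a contiguous range. */
if (!nvgpu_iommuable(g))
	flags = NVGPU_DMA_FORCE_CONTIGUOUS;

err = nvgpu_dma_alloc_flags_sys(g, flags, compbit_backing_size,
				&gr->compbit_store.mem);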
Also delete the gk20a/ltc_gk20a.[ch] code. All that remained in these
files was the CBC alloc functions which were completely chip agnostic.
As a result these functions were consolidated and moved to common/ltc.c.
Bug 2015747
Change-Id: I3f41961b4f94378b954e7502a6b27cf0bc627375
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1593666
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The regops limit was set in common code to a hard-coded value and accessed
from Linux code. Change the responsibility so that the regops limit is set
to a hard-coded value in the Linux GPU characteristics query, and use the
same hard-coded value in the IOCTL limit check (sketched below).
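A hedged sketch of the arrangement; the define name and value are assumptions:
#define NVGPU_IOCTL_DBG_REG_OPS_LIMIT 1024 /* assumed value */

/* GPU characteristics query (Linux code). */
gpu->reg_ops_limit = NVGPU_IOCTL_DBG_REG_OPS_LIMIT;

/* IOCTL handler: reject requests above the same limit. */
if (args->num_ops > NVGPU_IOCTL_DBG_REG_OPS_LIMIT)
	return -EINVAL;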
JIRA NVGPU-259
Change-Id: I2f78a7ea8f1cb68a08633a2dc74b71b3b001e5c9
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1593682
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Request explicitly contiguous DMA memory for large page directory
allocations. Large in this case means greater than PAGE_SIZE. This
is necessary if the GPU's DMA allocator is set to, by default,
allocate discontiguous memory.
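Sketch of the idea, with helper and field names assumed:
unsigned long flags = 0;

/* Page directories larger than PAGE_SIZE must be physically contiguous
 * even if the default DMA allocator hands out discontiguous memory. */
if (bytes > PAGE_SIZE)
	flags = NVGPU_DMA_FORCE_CONTIGUOUS;

err = nvgpu_dma_alloc_flags(g, flags, bytes, &pd->mem);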
Bug 2015747
Change-Id: I3afe9c2990522058f6aa45f28030bc82a369ca69
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1593093
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The runlist levels NVGPU_RUNLIST_INTERLEAVE_LEVEL_* are declared in the
Linux-specific uapi header but used in common code.
Since common code should be Linux-independent, move those uses out of
common code.
Define new runlist levels NVGPU_FIFO_RUNLIST_INTERLEAVE_LEVEL_* in common code
and use them wherever required.
Add a new API, nvgpu_get_common_runlist_level(), to convert a Linux-specific
runlist level of the form NVGPU_RUNLIST_INTERLEAVE_LEVEL_* to a common runlist
level of the form NVGPU_FIFO_RUNLIST_INTERLEAVE_LEVEL_*.
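Sketch of the conversion helper; the level names beyond the prefixes quoted
above are assumptions:
u32 nvgpu_get_common_runlist_level(u32 level)
{
	switch (level) {
	case NVGPU_RUNLIST_INTERLEAVE_LEVEL_LOW:
		return NVGPU_FIFO_RUNLIST_INTERLEAVE_LEVEL_LOW;
	case NVGPU_RUNLIST_INTERLEAVE_LEVEL_MEDIUM:
		return NVGPU_FIFO_RUNLIST_INTERLEAVE_LEVEL_MEDIUM;
	case NVGPU_RUNLIST_INTERLEAVE_LEVEL_HIGH:
		return NVGPU_FIFO_RUNLIST_INTERLEAVE_LEVEL_HIGH;
	default:
		return NVGPU_FIFO_RUNLIST_INTERLEAVE_LEVEL_LOW;
	}
}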
Jira NVGPU-259
Change-Id: Ic19239f0f8275683d5d1b981df530acd90e6dfbb
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1594327
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Make NVGPU_AS_MAP_BUFFER_FLAGS_DIRECT_KIND_CTRL mandatory for all map
IOCTLs. We'll clean up the legacy kernel code in subsequent patches.
Remove support for NVGPU_AS_IOCTL_MAP_BUFFER. It has been superseded
by NVGPU_AS_IOCTL_MAP_BUFFER_EX.
Remove the legacy definitions of nvgpu_map_buffer_args and the related
flags, and update the in-kernel map calls accordingly by switching to
the newer definitions.
Bug 1902982
Change-Id: Ie9a7f02b8d5d0ec7c3722c4481afab6d39b4fbd0
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1560932
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In dbg_set_powergate(), we use the flags NVGPU_DBG_GPU_POWERGATE_MODE_DISABLE/ENABLE,
which are defined in the Linux-specific uapi header.
Hence we need to remove those flags from common code.
Update dbg_set_powergate() to receive a boolean flag to disable/enable powergating
instead of NVGPU_DBG_GPU_POWERGATE_MODE_DISABLE/ENABLE.
Also update the corresponding HALs accordingly.
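Illustrative only; the Linux debugger IOCTL code might derive the boolean
like this (argument names assumed):
bool disable_powergate =
	(args->mode == NVGPU_DBG_GPU_POWERGATE_MODE_DISABLE);

err = dbg_set_powergate(dbg_s, disable_powergate);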
Jira NVGPU-259
Change-Id: I9c4eb30e29ea5ce0d8e25517a6a072fb9f0e92e5
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1594326
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Log an error message for pri_squash, fecserr and pri_timeout
pbus interrupts. If FECS_TGT is set in timeout_save_0, the
addr and write fields are not reliable, and timeout_save_1
is also unreliable. For both squash and timeout, the data should
be correct most of the time. Even with FECS_TGT set, a timeout
on a read should indicate the correct transaction, as Host
only supports one read at a time; it is mostly writes
to FECS that can carry incorrect information.
Bug 200246808
Bug 200350539
Change-Id: I8a992d924ff6c740a8dacecaaaf4ef257756d01d
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1568860
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
dbg_gpu_gk20a.h implicitly used definitions that it did not forward
declare or #include.
Also, the regops_whitelist fields were unused, and the type itself is not
defined anywhere. Delete the fields.
Change-Id: I4b002247c67a4ce4cb54810720b0bbc06381bf83
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1593681
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Simplify the copyengine code by deleting support for the
ce_event_callback feature, which has never been used. Similarly, create the
channel without the finish callback to get rid of that Linux dependency,
and delete the finish callback function as it no longer serves any purpose.
Also delete the submitted_seq_number and completed_seq_number fields,
which are only ever written to.
Jira NVGPU-259
Change-Id: I02d15bdcb546f4dd8895a6bfb5130caf88a104e2
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1589320
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
-Power features should be enabled only if the s/w flags xxcg_enabled
are set for the corresponding features. These flags control whether a
feature is kept disabled in the hardware. For the disable
case, register programming still happens for the CG registers,
and they are set to disabled. For ELPG, the init command is
sent to the PMU, but "ELPG_ALLOW" is not sent.
These flags can also be modified via sysfs. They are a no-op
if the corresponding can_xxxg flags are set to false.
The s/w flags can_xxxg describe the platform's ability to support
a power feature and cannot be modified via sysfs. Setting these
flags to false prevents any HW register writes or init sequence
for the power feature from executing. For ELPG, no commands will
be sent to the PMU.
-The g->elcg_enabled flag should not be modified here.
It should be modified only via sysfs. This will be cleaned up in
a follow-up implementation where the debug session will hold some
kind of lock that keeps power features disabled as long as it
wants. The debugger cannot rely on this flag to keep power
management disabled, because these flags can be changed from sysfs;
due to this, someone can easily break a debugging session
by accidentally changing something in sysfs.
The proper fix is being tracked in NVGPU-320.
Bug 1982434
Change-Id: I660ef02491f4df9910bf4dea3561ac8a0838e1b1
Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1587205
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
ioctl_channel.h and cde.h referred to multiple structures that they did
not forward declare or explicitly #include. Add several forward
declarations and #includes. Also add an #include for
<uapi/linux/nvgpu.h> to multiple Linux .c files that were missing it.
Change-Id: Iefd52e71224d5810b5abbcc765f92bc535d7a28b
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1591634
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>