Commit Graph

27 Commits

Sagar Kamble
f55fd5dc8c gpu: nvgpu: multiple address spaces support for subcontexts
This patch introduces the following relationships among various nvgpu
objects to support multiple address spaces with subcontexts. The
IOCTLs that establish each relationship are shown in parentheses
(a rough struct sketch follows the list).

nvgpu_tsg             1<---->n nvgpu_tsg_subctx (TSG_BIND_CHANNEL_EX)
nvgpu_tsg             1<---->n nvgpu_gr_ctx_mappings (ALLOC_OBJ_CTX)

nvgpu_tsg_subctx      1<---->1 nvgpu_gr_subctx (ALLOC_OBJ_CTX)
nvgpu_tsg_subctx      1<---->n nvgpu_channel (TSG_BIND_CHANNEL_EX)

nvgpu_gr_ctx_mappings 1<---->n nvgpu_gr_subctx (ALLOC_OBJ_CTX)
nvgpu_gr_ctx_mappings 1<---->1 vm_gk20a (ALLOC_OBJ_CTX)
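
As a rough illustration of the ownership (field names here are
hypothetical sketches, not the actual nvgpu definitions):

    /* Sketch only: how the relationships above could be carried in C. */
    struct nvgpu_tsg_subctx {
        struct nvgpu_tsg *tsg;              /* owning TSG (n subctx : 1 TSG) */
        struct nvgpu_gr_subctx *gr_subctx;  /* 1:1, created on ALLOC_OBJ_CTX */
        struct nvgpu_list_node ch_list;     /* 1:n channels bound via
                                               TSG_BIND_CHANNEL_EX */
    };

    struct nvgpu_gr_ctx_mappings {
        struct vm_gk20a *vm;                /* 1:1 address space for the maps */
        struct nvgpu_list_node subctx_list; /* gr_subctx objects sharing the VM */
    };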

On unbinding a channel, these objects are deleted according
to their dependencies.

Without subcontexts, gr_ctx buffer mappings are maintained in
struct nvgpu_gr_ctx. For subcontexts, they are maintained in
struct nvgpu_gr_subctx.

The preemption buffer (index NVGPU_GR_CTX_PREEMPT_CTXSW) and the PM
buffer (index NVGPU_GR_CTX_PM_CTX) are to be mapped in all
subcontexts when they are programmed from the respective ioctls.

Global GR context buffers are to be programmed only for VEID0.
Based on the channel object class, the state is patched into the
patch buffer on every ALLOC_OBJ_CTX call, unlike before, when it
was set only for the first channel.

PM and preemption buffer programming is protected by the TSG
ctx_init_lock.

tsg->vm is now removed. The VM reference for gr_ctx buffer mappings
is managed through the gr_ctx or gr_subctx mappings object.

For vGPU, gr_subctx and mappings objects are created to reference
VMs for the gr_ctx lifetime.

The functions nvgpu_tsg_subctx_alloc_gr_subctx and
nvgpu_tsg_subctx_setup_subctx_header set up the subcontext struct
and subcontext header for the native driver.

The function nvgpu_tsg_subctx_alloc_gr_subctx is called from the
vGPU driver to manage the gr ctx mapping references.

free_subctx is now done when unbinding a channel, taking into account
references to the subcontext held by other channels. In the native
driver case it unmaps the buffers; in the vGPU case it only releases
the VM reference.

Note that the TEGRA_VGPU_CMD_FREE_CTX_HEADER command is no longer
issued by vGPU, as this is taken care of by the native driver.

Bug 3677982

Change-Id: Ia439b251ff452a49f8514498832e24d04db86d2f
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2718760
Reviewed-by: Scott Long <scottl@nvidia.com>
Reviewed-by: Ankur Kishore <ankkishore@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2022-09-08 20:59:59 -07:00
Sagar Kamble
f95cb5f4f8 gpu: nvgpu: maintain ctx buffers mappings separately from ctx mems
In order to maintain separate mappings of GR TSG and global context
buffers for different subcontexts, we need to separate the memory
struct and the mapping struct for these buffers. This patch moves
the mappings of all GR ctx buffers to the new structure
nvgpu_gr_ctx_mappings.

This will be instantiated per subcontext in the upcoming patches.

Summary of changes:
  1. Various context buffers were allocated and mapped separately.
     All TSG context buffers are now stored in the gr_ctx->mem[]
     array, since allocation and mapping are unified for them.
  2. Mapping/unmapping and querying the GPU VA of the context
     buffers is now handled in the ctx_mappings unit. The structure
     nvgpu_gr_ctx_mappings in nvgpu_gr_ctx holds the maps; it is
     instantiated on ALLOC_OBJ_CTX and deleted on free_gr_ctx.
  3. Introduce mapping flags for TSG and global context buffers,
     so that different buffers can be mapped with different caching
     attributes. Map all buffers as cacheable except the
     PRIV_ACCESS_MAP, RTV_CIRCULAR_BUFFER, FECS_TRACE, GR CTX
     and PATCH ctx buffers. Map all buffers as privileged.
  4. Wherever a VM or GPU VA was passed to the obj_ctx allocation
     functions, it is now replaced by nvgpu_gr_ctx_mappings.
  5. The free_gr_ctx API need not accept the VM, as the mappings
     struct holds the VM. The mappings struct is kept in gr_ctx.
  6. Move the preemption buffer allocation logic out of
     nvgpu_gr_obj_ctx_set_graphics_preemption_mode.
  7. Update set_preemption_mode and gr_gk20a_update_hwpm_ctxsw_mode
     to ensure buffers are allocated and mapped.
  8. Keep the unit tests and documentation updated.

With these changes there is a clear segregation of allocation and
mapping of GR context buffers. This will simplify the upcoming change
to add multiple address space support: with multiple address spaces
in a TSG, subcontexts created after the first one just need to map
the buffers.
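
A minimal sketch of the split (the real struct is private to the
ctx_mappings unit; the fields and array bounds below are assumptions
for illustration):

    /* Illustrative only: buffer memory stays in gr_ctx->mem[],
     * per-VM GPU VAs move into the mappings object. */
    struct nvgpu_gr_ctx_mappings {
        struct vm_gk20a *vm;        /* VM these buffers are mapped into */
        u64 ctx_buffer_va[NVGPU_GR_CTX_COUNT];               /* TSG ctx buffers */
        u64 global_ctx_buffer_va[NVGPU_GR_GLOBAL_CTX_COUNT]; /* global buffers */
    };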

Bug 3677982

Change-Id: I3cd5f1311dd85aad1cf547da8fa45293fb7a7cb3
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2712222
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2022-07-15 07:10:11 -07:00
Sagar Kamble
931e5f8220 gpu: nvgpu: update gr_ctx patch and pm setup functions
The set_patch_addr parameter to nvgpu_gr_ctx_set_patch_ctx was
redundant. Remove it.

Add new functions nvgpu_gr_ctx_set_hwpm_pm_mode to set the PM mode
and nvgpu_gr_ctx_set_hwpm_ptr to set the PM pointer in gr_ctx. Rename
the subctx function to nvgpu_gr_subctx_set_hwpm_ptr.

This simplifies the logic in gr_gk20a_update_hwpm_ctxsw_mode that sets
the PM mode and PM pointer. The channel loop is needed only for
subcontexts.
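
The resulting flow roughly becomes (a sketch with simplified argument
lists, not the exact prototypes; the channel iterator is hypothetical):

    /* Set the PM mode and pointer once in the TSG's gr_ctx. */
    nvgpu_gr_ctx_set_hwpm_pm_mode(g, gr_ctx);
    nvgpu_gr_ctx_set_hwpm_ptr(g, gr_ctx, set_pm_ptr);

    /* Only subcontexts need the per-channel loop. */
    if (nvgpu_is_enabled(g, NVGPU_SUPPORT_TSG_SUBCONTEXTS)) {
        nvgpu_tsg_for_each_channel(tsg, ch) {
            nvgpu_gr_subctx_set_hwpm_ptr(g, ch->subctx, gr_ctx);
        }
    }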

Bug 3677982

Change-Id: I44acb09f6296ba8d510e278910188864f39e7157
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2743724
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit
2022-07-15 07:10:00 -07:00
Sagar Kamble
bfa20f62c6 gpu: nvgpu: add/remove l2 cache flush when updating the ctx buffers
The gr ctx buffer is non-cacheable, hence there is no need to do an
L2 cache flush when updating the buffer. Remove the flushes.

The pm ctx buffer is cacheable, hence add an L2 flush in
nvgpu_profiler_quiesce_hwpm_streamout_non_resident since it updates
the buffer.
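
In the quiesce path this amounts to something like the following
(the HAL is named per the mm.cache unit; the exact signature may
differ):

    /* pm ctx buffer is cacheable: after updating its contents, flush L2
     * so the new data is visible to the GPU. */
    if (g->ops.mm.cache.l2_flush(g, false) != 0) {
        nvgpu_err(g, "l2_flush failed");
    }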

Bug 3677982

Change-Id: I0c15ec7a7f8fa250af1d25891122acc24443a872
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2713916
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2022-06-24 12:08:54 -07:00
Sagar Kamble
65e7baf856 gpu: nvgpu: s/NVGPU_GR_CTX_*_VA/NVGPU_GR_GLOBAL_CTX_*_VA
Indices for the global ctx buffer virtual address array were named
with the prefix GR_CTX and defined in ctx.h. Prefix them with
GR_GLOBAL_CTX and move them to global_ctx.h.

Also remove the flag global_ctx_buffer_mapped as it is not used.

Bug 3677982

Change-Id: I9042e1c2bd8e8e10e97893484daeff0f97a96ea0
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2704855
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2022-06-24 12:08:33 -07:00
Sagar Kamble
7fa6976a98 gpu: nvgpu: remove dead code
nvgpu_gr_subctx_set_patch_ctx was earlier used in the HAL
gops.gr.ctx_patch_smpc. That usage was removed since the HAL applies
only to gm20b, which doesn't support subcontexts. Remove the
function.

gp10b_gr_init_commit_global_attrib_cb is also not used by any chip,
so remove it as well.

Bug 3677982

Change-Id: Ief1c1a4038d3eed1cba3a71d83a2a438158f15f3
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2704854
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Scott Long <scottl@nvidia.com>
Reviewed-by: Ankur Kishore <ankkishore@nvidia.com>
GVS: Gerrit_Virtual_Submit
2022-06-24 12:08:20 -07:00
Divya
d538737ba1 gpu: nvgpu: Add ELPG_MS protected call for L2 flush
- If an L2 flush is done while the ELPG_MS feature is engaged,
  it can cause some of the signals to go non-idle. This can cause
  an idle snap in ELPG_MS.
- To avoid the idle snap, add an elpg_ms protected call before
  the L2 flush operation (see the sketch below).
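
A rough sketch of the wrapped flush (the protected-call helper and
callback names below are hypothetical; the change follows the existing
ELPG protected-call pattern of disabling the power feature around the
HW access and re-enabling it afterwards):

    /* Keep ELPG_MS disengaged while the L2 flush runs so that signals
     * going non-idle during the flush cannot trigger an idle snap. */
    err = nvgpu_pg_elpg_ms_protected_call(g, do_l2_flush);
    if (err != 0) {
        nvgpu_err(g, "protected L2 flush failed");
    }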

Bug 200763448

Change-Id: I651875bc051c3b7d26d2bb0b593083512a5765b2
Signed-off-by: Divya <dsinghatwari@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2599459
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2021-10-22 06:20:13 -07:00
Konsta Hölttä
1b1d183b9c gpu: nvgpu: simplify gmmu map calls
Introduce nvgpu_gmmu_map_partial() to map a specific size of a buffer
represented by an nvgpu_mem, i.e. what nvgpu_gmmu_map() used to do.
Delete the size parameter from nvgpu_gmmu_map() so that it now maps the
entire buffer. The separate size parameter is a historical artifact from
when nvgpu_mem did not exist yet; the typical use is to map the entire
buffer.

Mapping at a certain address with nvgpu_gmmu_map_fixed() still takes the
size parameter.

The returned address still has to be stored somewhere, typically to
mem.gpu_va by the caller so that the matching unmap variant finds the
right address.
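
Typical call sites then look roughly like this (the parameters after
the buffer are illustrative and may not match the exact prototypes):

    /* Map the entire buffer: nvgpu_gmmu_map() no longer takes a size. */
    mem->gpu_va = nvgpu_gmmu_map(vm, mem, flags, rw_flag, priv, aperture);

    /* Map only the first map_size bytes of the buffer. */
    mem->gpu_va = nvgpu_gmmu_map_partial(vm, mem, map_size, flags, rw_flag,
                                         priv, aperture);
    if (mem->gpu_va == 0ULL) {
        return -ENOMEM;
    }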

Change-Id: I7d67a0b15d741c6bcee1aecff1678e3216cc28d2
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2601788
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2021-10-01 21:38:43 -07:00
Konsta Hölttä
44422db851 gpu: nvgpu: simplify gmmu unmap calls
Introduce nvgpu_gmmu_unmap_addr() to unmap an nvgpu_mem that was mapped
at some address other than mem.gpu_va, which can be the case for buffers
that are shared across different address spaces. Delete the address
parameter from nvgpu_gmmu_unmap(), as the common case is to store the
address in mem.gpu_va when mapping the buffer.

Modify some instances of consecutive unmap + free calls to call just
nvgpu_dma_unmap_free().
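
In sketch form (argument order is illustrative):

    /* Common case: the buffer was mapped at mem->gpu_va. */
    nvgpu_gmmu_unmap(vm, mem);

    /* Buffer shared across address spaces, mapped at a different VA. */
    nvgpu_gmmu_unmap_addr(vm, mem, other_gpu_va);

    /* Consecutive unmap + free collapsed into one call. */
    nvgpu_dma_unmap_free(vm, mem);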

Change-Id: Iecd7c9aa41d04e9f48e055f6bc0c9227cd759c69
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2601787
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2021-09-30 16:29:41 -07:00
Sagar Kamble
62b04331de gpu: nvgpu: compile out priv_access_map config/addr hals
These HALs are non-safety. Guard them with
CONFIG_NVGPU_SET_FALCON_ACCESS_MAP so they can be compiled out.
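
The guard looks like the usual config wrapper (the HAL assignment
below is illustrative, not the actual member name):

    #ifdef CONFIG_NVGPU_SET_FALCON_ACCESS_MAP
        gops->gr.init.get_access_map = gv11b_gr_init_get_access_map;
    #endif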

JIRA NVGPU-5358

Change-Id: I75b46e201fa132e09fee15679a402d24bbf9b2ab
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2581360
(cherry picked from commit d048333ef391019b2618abf7d09c8fe2042f8ee0)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2581841
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2021-09-21 03:17:00 -07:00
Antony Clince Alex
c7d43f5292 gpu: nvgpu: remove usage of CONFIG_NVGPU_NEXT
The CONFIG_NVGPU_NEXT config is no longer required now that ga10b and
ga100 sources have been collapsed. However, the ga100, ga10b sources
are not safety certified, so mark them as NON_FUSA by replacing
CONFIG_NVGPU_NEXT with CONFIG_NVGPU_NON_FUSA.

Move CONFIG_NVGPU_MIG to Makefile.linux.config and enable MIG support
by default on standard build.

Jira NVGPU-4771

Change-Id: Idc5861fe71d9d510766cf242c6858e2faf97d7d0
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2547092
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2021-06-27 05:02:47 -07:00
Deepak Nibade
9034b1676e gpu: nvgpu: compile out GFxP support in safety
GFxP preemption for graphics contexts is not supported in safety,
but the support was enabled along with CONFIG_NVGPU_GRAPHICS since
GFxP preemption was protected under the same config.

Add a separate config CONFIG_NVGPU_GFXP to protect all GFxP specific
code, enum values, and HALs.

Disable the config in safety profile.

Jira NVGPU-6893

Change-Id: Iebb5f754a1025dfa6e05a94704bdb8a7123b599a
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2534986
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2021-05-28 15:17:36 -07:00
vinodg
89518f3740 gpu: nvgpu: compile out unused code in gr unit
Compile out code in the gr subunit that is not used in the safety
build. The code is used only with dGPU support.

Jira NVGPU-3968

Change-Id: I7be5b06c6eed5a6d382016f1ccb5dbec63928294
Signed-off-by: vinodg <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2247146
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:10:29 -06:00
Scott Long
f6fc8a0540 gpu: nvgpu: gr: fix misra 2.7 violations
Eliminate several Advisory Rule 2.7 violations in gr code.

Advisory Rule 2.7 states that there should be no unused
parameters in functions.
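
A typical fix for this class of violation (the function below is made
up for illustration, not taken from the actual change):

    /* Before: `g` is accepted but never referenced (MISRA 2.7). */
    static u32 gr_example_get_count(struct gk20a *g, u32 gpc_count)
    {
        return gpc_count * 2U;
    }

    /* After: the unused parameter is dropped and callers are updated. */
    static u32 gr_example_get_count(u32 gpc_count)
    {
        return gpc_count * 2U;
    }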

Jira NVGPU-3178

Change-Id: I415023a297031884b2d1be667551de2a7d8f23ad
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2174007
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-08-13 14:15:06 -07:00
Seshendra Gadagottu
e364102f9a gpu: nvgpu: add graphics flag for gfxp related code
Move GFXP-related code under the CONFIG_NVGPU_GRAPHICS flag.
Keep the NVGPU_PREEMPTION_MODE_GRAPHICS_WFI support.

JIRA NVGPU-3415

Change-Id: Ie690ac66df4b94eb113a5898d94a892fe0ce7b11
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2135427
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-06-24 02:46:03 -07:00
Deepak Nibade
1239bf67a5 gpu: nvgpu: add debugger flag for hal.gr.ctxsw_prog unit
Add the CONFIG_NVGPU_DEBUGGER flag for debugger-specific code in the
hal.gr.ctxsw_prog unit. Also add this flag for PM context
allocation/free.

Jira NVGPU-3506

Change-Id: Ib40569c7617b8b8aa3343fc89f3d8f30b1d21aa6
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2132254
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-06-13 12:05:14 -07:00
Sagar Kamble
3f08cf8a48 gpu: nvgpu: rename feature Make and C flags
Name the Make and C flag variables consistently with the syntax
CONFIG_NVGPU_<feature name>:

s/NVGPU_DEBUGGER/CONFIG_NVGPU_DEBUGGER
s/NVGPU_CYCLESTATS/CONFIG_NVGPU_CYCLESTATS
s/NVGPU_USERD/CONFIG_NVGPU_USERD
s/NVGPU_CHANNEL_WDT/CONFIG_NVGPU_CHANNEL_WDT
s/NVGPU_FEATURE_CE/CONFIG_NVGPU_CE
s/NVGPU_GRAPHICS/CONFIG_NVGPU_GRAPHICS
s/NVGPU_ENGINE/CONFIG_NVGPU_FIFO_ENGINE_ACTIVITY
s/NVGPU_FEATURE_CHANNEL_TSG_SCHED/CONFIG_NVGPU_CHANNEL_TSG_SCHED
s/NVGPU_FEATURE_CHANNEL_TSG_CONTROL/CONFIG_NVGPU_CHANNEL_TSG_CONTROL
s/NVGPU_FEATURE_ENGINE_QUEUE/CONFIG_NVGPU_ENGINE_QUEUE
s/GK20A_CTXSW_TRACE/CONFIG_NVGPU_FECS_TRACE
s/IGPU_VIRT_SUPPORT/CONFIG_NVGPU_IGPU_VIRT
s/CONFIG_TEGRA_NVLINK/CONFIG_NVGPU_NVLINK
s/NVGPU_DGPU_SUPPORT/CONFIG_NVGPU_DGPU
s/NVGPU_VPR/CONFIG_NVGPU_VPR
s/NVGPU_REPLAYABLE_FAULT/CONFIG_NVGPU_REPLAYABLE_FAULT
s/NVGPU_FEATURE_LS_PMU/CONFIG_NVGPU_LS_PMU
s/NVGPU_FEATURE_POWER_PG/CONFIG_NVGPU_POWER_PG

JIRA NVGPU-3624

Change-Id: I8b2492b085095fc6ee95926d8f8c3929702a1773
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2130290
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-06-11 09:46:24 -07:00
Vinod G
61fb688f1a gpu: nvgpu: Add flag checking for ZCULL code
Add NVGPU_GRAPHICS flag checks for ZCULL-specific code.
Define the NVGPU_GRAPHICS flag for ZCULL support.
This flag is now disabled for the safety build.

Jira NVGPU-3550

Change-Id: Ifd571a5e64e8fb2dfe02a87458a2986681900a6b
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2127515
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-05-31 04:08:11 -07:00
Alex Waterman
3a764030b1 gpu: nvgpu: Add new mm HAL and move cache code to that HAL
Add a new MM HAL directory to contain all MM related HAL units.
As part of this change add cache unit to the MM HAL. This contains
several related fixes:

1. Move the cache code in gk20a/mm_gk20a.c and gv11b/mm_gv11b.c to
   the new cache HAL. Update makefiles and header includes to take
   this into account. Also rename gk20a_{read,write}l() to their
   nvgpu_ variants.

2. Update the MM gops: move the cache related functions to the new
   cache HAL and update all calls to this HAL to reflect the new
   name.

3. Update some direct calls to gk20a MM cache ops to pass through
   the HAL instead.

4. Update the unit tests for various MM related things to use the
   new MM HAL locations.

This change accomplishes two architecture design goals. Firstly, it
removes one of the multiple HW header includes from mm_gk20a.c (the
flush HW header). Secondly, it moves code from the gk20a/ and gv11b/
directories into more proper locations under hal/.
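
Callers now route through the HAL rather than the chip code directly;
for example (old/new call shapes, shown as an illustration):

    /* Old: direct chip function, e.g. gk20a_mm_l2_flush(g, false); */
    /* New: go through the MM cache HAL. */
    err = g->ops.mm.cache.l2_flush(g, false);
    if (err != 0) {
        nvgpu_err(g, "L2 flush failed");
    }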

JIRA NVGPU-2042

Change-Id: I91e4bdca4341be4dbb46fabd72622b917769f4a6
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2095749
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-04-16 17:06:42 -07:00
Nitin Kumbhar
8664b3be6c gpu: nvgpu: make ctx structs private
Add a ctx_priv.h header for structs which are used within
nvgpu_gr_ctx. APIs are added to manage the fields of nvgpu_gr_ctx.
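
This follows the usual private-struct-plus-accessor pattern; a small
illustrative example (the field and accessor names are hypothetical):

    /* common/gr/ctx_priv.h -- visible only inside the gr/ctx unit. */
    struct nvgpu_gr_ctx {
        struct nvgpu_mem mem;
        u32 graphics_preempt_mode;
    };

    /* Accessor declared in include/nvgpu/gr/ctx.h. */
    u32 nvgpu_gr_ctx_get_graphics_preempt_mode(struct nvgpu_gr_ctx *gr_ctx)
    {
        return gr_ctx->graphics_preempt_mode;
    }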

JIRA NVGPU-3060

Change-Id: I396fbbb5199e354c62772e901e3bbf61d135f3b1
Signed-off-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2090398
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-04-12 04:04:31 -07:00
Sagar Kamble
974ad342fa gpu: nvgpu: update sec2.h header
Update sec2.c to not dereference struct gk20a and update sec2.h to
remove unneeded header files. Move sec2.h to include/nvgpu/sec2.

JIRA NVGPU-2074

Change-Id: I1a8f4b1913323693fae422ce27c4ec0ac29de24a
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2085752
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-04-10 09:26:56 -07:00
Nitin Kumbhar
82b5f356d0 gpu: nvgpu: make nvgpu_gr_subctx a priv struct
Make struct nvgpu_gr_subctx a private struct and add
an API to access the subctx header.

JIRA NVGPU-3060

Change-Id: Ia1f0471084f90eddd31ddc6869bd767866f9b4e2
Signed-off-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2088531
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-04-06 04:34:25 -07:00
Deepak Nibade
40e63a7857 gpu: nvgpu: remove ops.gr.set_preemption_buffer_va hal
Add the two new APIs below to set the preemption buffer VA in the
graphics context or subcontext, respectively:

nvgpu_gr_ctx_set_preemption_buffer_va()
nvgpu_gr_subctx_set_preemption_buffer_va()

Remove the g->ops.gr.set_preemption_buffer_va() HAL and use the above
APIs to set the preemption buffer VA.
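
Call sites then pick the ctx or subctx variant directly (a sketch;
the argument lists are assumptions):

    if (ch->subctx != NULL) {
        nvgpu_gr_subctx_set_preemption_buffer_va(g, ch->subctx, gr_ctx);
    } else {
        nvgpu_gr_ctx_set_preemption_buffer_va(g, gr_ctx);
    }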

Jira NVGPU-1887

Change-Id: I38fb76eaf01d3fc73fd8104f30bcd89be9fa45b6
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2076272
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-03-19 13:45:02 -07:00
Deepak Nibade
5b2eb887d5 gpu: nvgpu: add gr/ctx and gr/subctx APIs to configure patch context
gr_gk20a_ctx_patch_smpc() updates the patch context count and mode by
directly calling g->ops.gr.ctxsw_prog HALs.

Move the configuration of the patch context to the gr/ctx and gr/subctx
units with the APIs below, and call these from gr_gk20a_ctx_patch_smpc():
nvgpu_gr_ctx_reset_patch_count()
nvgpu_gr_ctx_set_patch_ctx()
nvgpu_gr_subctx_set_patch_ctx()

Jira NVGPU-1527
Jira NVGPU-1613

Change-Id: Ib1ccbc036aa0916e7bd0a002d16b74430a7e47c9
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2011094
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-02-11 10:25:38 -08:00
Deepak Nibade
fe27a7f934 gpu: nvgpu: add gr/ctx and gr/subctx APIs to set hwpm ctxsw mode
gr_gk20a_update_hwpm_ctxsw_mode() right now validates the incoming
hwpm mode, checks if it is already set, and if not, sets the new hwpm
mode by calling g->ops.gr.ctxsw_prog HALs.

Instead of programming the hwpm mode in gr_gk20a.c, move the programming
to the gr/ctx and gr/subctx units by adding the APIs below:
nvgpu_gr_ctx_prepare_hwpm_mode() - validate the incoming mode and
                                   check if it is already set
nvgpu_gr_ctx_set_hwpm_mode() - set the pm mode in the graphics context
nvgpu_gr_subctx_set_hwpm_mode() - set the pm mode in the subcontext

Add a gpu_va field to struct pm_ctx_desc to store the gpu_va to be
programmed into the context.

Rename NVGPU_DBG_HWPM_CTXSW_MODE_* to NVGPU_GR_CTX_HWPM_CTXSW_MODE_*
and move them to gr/ctx.h

Remove the HALs below since they are no longer used:
g->ops.gr.ctxsw_prog.set_pm_mode_no_ctxsw()
g->ops.gr.ctxsw_prog.set_pm_mode_ctxsw()
g->ops.gr.ctxsw_prog.set_pm_mode_stream_out_ctxsw()
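
Put together, the update path in gr_gk20a_update_hwpm_ctxsw_mode then
looks roughly like this (a sketch with abbreviated arguments; only the
call structure is the point):

    err = nvgpu_gr_ctx_prepare_hwpm_mode(g, gr_ctx, mode, &skip_update);
    if (err != 0 || skip_update) {
        return err;  /* invalid mode, or the mode is already set */
    }

    nvgpu_gr_ctx_set_hwpm_mode(g, gr_ctx, true);

    /* With subcontexts, mirror the PM state into each channel's subctx. */
    nvgpu_gr_subctx_set_hwpm_mode(g, ch->subctx, gr_ctx);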

Jira NVGPU-1527
Jira NVGPU-1613

Change-Id: Id2a4d498182ec0e3586dc7265f73a25870ca2ef7
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2011093
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-02-11 10:25:34 -08:00
Deepak Nibade
bac95b36d8 gpu: nvgpu: move zcull context setup to gr/ctx and gr/subctx units
In gr_gk20a_ctx_zcull_setup(), we configure the context/subcontext
with zcull details. This API currently does it directly by calling
the g->ops.gr.ctxsw_prog HAL.

Move all context/subcontext setup to the gr/ctx and gr/subctx units
respectively. Define and use the new APIs below for this:
gr/ctx : nvgpu_gr_ctx_zcull_setup()
gr/subctx : nvgpu_gr_subctx_zcull_setup()

Jira NVGPU-1527
Jira NVGPU-1613

Change-Id: I1b7b16baea60ea45535c623b5b41351610ca433e
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2011090
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-02-11 10:25:16 -08:00
Deepak Nibade
254253732c gpu: nvgpu: add new unit for GR subcontext
Add a new unit common/gr/subctx.c to manage the GR subcontext.
This unit provides interfaces to allocate/free/load the GR subcontext.

Add a new header file include/nvgpu/gr/subctx.h to declare all the
interfaces.

Right now the channel_gk20a structure directly includes an nvgpu_mem
for the context header. Declare a new structure nvgpu_gr_subctx for
the subcontext and include it from channel_gk20a.

Make all necessary changes to refer to ctx_header from the subctx
instead of directly referencing it from the channel.
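
In outline, the embedded ctx_header becomes a small wrapper struct
(a sketch; the real struct carries whatever the subctx unit needs):

    /* include/nvgpu/gr/subctx.h (sketch) */
    struct nvgpu_gr_subctx {
        struct nvgpu_mem ctx_header;  /* was embedded in channel_gk20a */
    };

    /* channel_gk20a then holds a pointer to the subctx instead. */
    struct nvgpu_gr_subctx *subctx;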

Jira NVGPU-1613

Change-Id: I9eb1ee8f26fa88d2881f9b294935b65e9cbcc9b4
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990129
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-02-02 03:03:43 -08:00