The g->ops.gr.alloc_gr_ctx HAL currently allocates the graphics
context and also initializes the preemption mode for various platforms.
Separate out a new HAL, g->ops.gr.init_ctxsw_preemption_mode, that
initializes the preemption mode, and call it from gk20a_alloc_obj_ctx()
after the context is created.
g->ops.gr.alloc_gr_ctx now only allocates the context, as the name
suggests.
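A minimal sketch of the intended call order in gk20a_alloc_obj_ctx(),
assuming illustrative argument names (the exact HAL signatures may
differ):

  err = g->ops.gr.alloc_gr_ctx(g, gr_ctx, vm, class_num, flags);
  if (err != 0) {
          return err;
  }

  /* preemption mode is now set up separately, once the context exists */
  if (g->ops.gr.init_ctxsw_preemption_mode != NULL) {
          g->ops.gr.init_ctxsw_preemption_mode(g, gr_ctx, vm,
                                               class_num, flags);
  }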
Jira NVGPU-1527
Change-Id: I8a44672d5ab2ebfe315e6334115265e4ee4f24f0
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1972254
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add gpc_ppc_count, pes_tpc_count and pes_tpc_mask to the constants so
they can be used to initialize the corresponding fields in gr.
They are needed to support get_nonpes_aware_tpc.
Bug 200452721
Jira GVSCI-91
Change-Id: Idbb59daab58c9877eab373439771b19c28cbcef1
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1967997
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a separate new unit, gr/ctxsw_prog, that provides the interface to
access the h/w header files hw_ctxsw_prog_*.h.
Add the chip-specific files below that access the above h/w unit and
provide the interface to the rest of the units through the
g->ops.gr.ctxsw_prog.*() HALs:
common/gr/ctxsw_prog/ctxsw_prog_gm20b.c
common/gr/ctxsw_prog/ctxsw_prog_gp10b.c
common/gr/ctxsw_prog/ctxsw_prog_gv11b.c
Remove all the h/w header includes from the rest of the units and code.
Remove direct calls to the h/w header functions ctxsw_prog_*() and use
the g->ops.gr.ctxsw_prog.*() HALs instead.
In gr_gk20a_find_priv_offset_in_ext_buffer(), the h/w header
ctxsw_prog_extended_num_smpc_quadrants_v() is only defined on gk20a;
since gk20a is no longer supported, remove the corresponding code.
Add the missing h/w header ctxsw_prog_main_image_pm_mode_ctxsw_f() for
some chips.
Add the new h/w header ctxsw_prog_gpccs_header_stride_v().
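As an illustration of the new layering, each chip file wraps a raw h/w
header accessor behind the HAL; the function and accessor names below
are assumed, not taken verbatim from this change:

  u32 gm20b_ctxsw_prog_hw_get_fecs_header_size(struct gk20a *g)
  {
          return ctxsw_prog_fecs_header_v();
  }

Callers then use g->ops.gr.ctxsw_prog.hw_get_fecs_header_size(g)
instead of including hw_ctxsw_prog_gm20b.h directly.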
Jira NVGPU-1526
Change-Id: I170f5c0da26ada833f94f5479ff299c0db56a732
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1966111
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add gk20a_channel_from_id() to retrieve a channel, given a raw channel
ID, with a reference taken (or NULL if the channel was dead). This makes
it harder to mistakenly use a channel that's dead and thus uncovers bugs
sooner. Convert code to use the new lookup when applicable; work remains
to convert complex uses where a ref should have been taken but hasn't.
The channel ID is also validated against FIFO_INVAL_CHANNEL_ID; NULL is
returned for such IDs. This is often useful and does not hurt when
unnecessary.
However, this does not prevent the case where a channel is closed and
reopened again while someone holds a stale channel number; in all such
conditions the caller should already hold a reference.
The only conditions where a channel can be safely looked up by an id and
used without taking a ref are when initializing or deinitializing the
list of channels.
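Typical usage of the new helper looks like the sketch below; the error
handling is illustrative:

  struct channel_gk20a *ch = gk20a_channel_from_id(g, chid);

  if (ch == NULL) {
          /* invalid ID, or the channel is dead */
          return -EINVAL;
  }

  /* ... use the channel with a reference held ... */

  gk20a_channel_put(ch);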
Jira NVGPU-1460
Change-Id: I0a30968d17c1e0784d315a676bbe69c03a73481c
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1955400
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The return type of the function pointer *calc_global_ctx_buffer_size()
and of all its implementations is changed from int to u32.
The type of the size argument of *set_big_page_size() and of all its
implementations is changed from int to u32. These changes are necessary
because a size should be an unsigned value.
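The resulting HAL member declarations look roughly like this; the
argument lists shown are assumptions, only the int-to-u32 types come
from this change:

  u32 (*calc_global_ctx_buffer_size)(struct gk20a *g);
  void (*set_big_page_size)(struct gk20a *g, struct nvgpu_mem *mem,
                            u32 size);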
JIRA NVGPU-992
Change-Id: I3e4cd1d83749777aa8588a44a48772e26f190c4d
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1950503
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 21.15 prohibits use of memcpy() with incompatible ptrs
to qualified/unqualified types.
To circumvent this issue we've introduced a new MISRA-compliant
nvgpu_memcpy() function.
This change switches non-offending memcpy() usage in vgpu/* code over
to nvgpu_memcpy(), with appropriate casts applied to maintain
consistency within nvgpu.
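The conversion pattern is roughly as follows; the buffer names are
illustrative and nvgpu_memcpy() is assumed to take u8 pointers plus a
length:

  /* before */
  memcpy(&params, oob, sizeof(params));

  /* after: MISRA-friendly wrapper with explicit casts */
  nvgpu_memcpy((u8 *)&params, (u8 *)oob, sizeof(params));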
JIRA NVGPU-849
Change-Id: Ib62d77929d60bbda64c51d3d81adf66ed155a8a9
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1946262
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 17.7 requires the return value of all functions to be used.
The fix is either to use the return value or to change the function to
return void. This patch contains fixes for all 17.7 violations in
standard C functions in GPU-specific files.
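For a standard C function the typical fix looks like one of the
following (illustrative fragments):

  /* use the return value */
  if (memcmp(a, b, size) != 0) {
          return -EINVAL;
  }

  /* or explicitly discard it where it is not needed */
  (void) memset(buf, 0, size);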
JIRA NVGPU-1036
Change-Id: Iefadc38bdbea4f02de3c24b6ad1c71d6eb0af4bd
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1929903
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Rename gk20a/dbg_gpu_gk20a.c to common/debugger.c and make it a
separate common unit.
Also rename gk20a/dbg_gpu_gk20a.h to include/nvgpu/debugger.h.
We had two different HALs for the debugger - gops.debugger and
gops.dbg_session_ops. Combine them into a single HAL, gops.debugger,
and remove gops.dbg_session_ops.
Rename all APIs exported from debugger.h to be of the form nvgpu_*().
Jira NVGPU-1013
Change-Id: I136dc7786e3b2065921eb03b99f16049212f3cd2
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1920075
Reviewed-by: Sachin Jadhav <sachinj@nvidia.com>
Tested-by: Sachin Jadhav <sachinj@nvidia.com>
Add READ_SM_ERROR IOCTL support at the TSG level.
Move the struct that saves the sm_error details from gr to tsg, as the
sm_error support is context based, not global.
Also correct a MISRA 21.1 error in the header file.
The nvgpu_dbg_gpu_ioctl_write_single_sm_error_state and
nvgpu_dbg_gpu_ioctl_read_single_sm_error_state functions are modified
to use the struct nvgpu_tsg_sm_error_state.
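The per-context error state now lives with the TSG; a sketch of the
moved struct, with field names assumed from the previous gr-level
definition:

  struct nvgpu_tsg_sm_error_state {
          u32 hww_global_esr;
          u32 hww_warp_esr;
          u64 hww_warp_esr_pc;
          u32 hww_global_esr_report_mask;
          u32 hww_warp_esr_report_mask;
  };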
Bug 200412642
Change-Id: I9e334b059078a4bb0e360b945444cc4bf1cc56ec
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1794856
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Change the enum gmmu_pgsz_gk20a into macros and update all instances
of it.
The enum gmmu_pgsz_gk20a was being used in for loops, where it was
compared with an integer. This violates MISRA rule 10.4, which only
allows arithmetic operations on operands of the same essential type
category. Changing this enum into macros fixes the violation.
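A sketch of the replacement macros and a conforming loop; the macro
values shown are assumptions:

  #define GMMU_PAGE_SIZE_SMALL    0U
  #define GMMU_PAGE_SIZE_BIG      1U
  #define GMMU_PAGE_SIZE_KERNEL   2U
  #define GMMU_NR_PAGE_SIZES      3U

  u32 i;

  for (i = GMMU_PAGE_SIZE_SMALL; i < GMMU_NR_PAGE_SIZES; i++) {
          /* both loop operands are now essentially unsigned */
  }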
JIRA NVGPU-993
Change-Id: I6f18b08bc7548093d99e8229378415bcdec749e3
Signed-off-by: Amulya <Amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1795593
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Write the new pm mode to the context buffer header. Ucode uses this
mode to enable Mode-E context switch. This is Mode-B context switch of
the PMs with Mode-E streamout on one context. If this mode is set,
ucode makes sure that the Mode-E pipe (perfmons, routers, pma) is idle
before it context switches the PMs.
- This allows us to collect counters in a secure way (i.e. on a
per-context basis) with stream out.
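A sketch of how the new mode would be written into the main context
header; the register field helpers and the exact field name are
assumed:

  data = nvgpu_mem_rd(g, ctxheader, ctxsw_prog_main_image_pm_o());
  data = set_field(data, ctxsw_prog_main_image_pm_mode_m(),
                   ctxsw_prog_main_image_pm_mode_stream_out_ctxsw_f());
  nvgpu_mem_wr(g, ctxheader, ctxsw_prog_main_image_pm_o(), data);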
Bug 2106999
Change-Id: I5a7435f09d1bf053ca428e538b0a57f3a175ac37
Signed-off-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1760366
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The NVGPU_DBG_GPU_IOCTL_PC_SAMPLING ioctl is not handled properly in
the HV case for both Linux and QNX. Currently the guest VM tries to
perform GPU memory read and write operations that are supposed to be
done by the RM server, causing a crash. This patch is intended to fix
the ioctl failure.
Bug 2052040
Change-Id: Ia0773959b84739a1bced858331764751520a3561
Signed-off-by: Prateek Sethi <prsethi@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1708102
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Sourab Gupta <sourabg@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Sourab Gupta <sourabg@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Switch all logging to nvgpu_log*(). gk20a_dbg* macros are
intentionally left there because of use from other repositories.
Because the new functions do not work without a pointer to struct
gk20a, and piping it just for logging is excessive, some log messages
are deleted.
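The conversion pattern, in short (the struct gk20a pointer must now be
available at the call site):

  /* before */
  gk20a_dbg_fn("");

  /* after */
  nvgpu_log_fn(g, " ");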
Change-Id: I00e22e75fe4596a330bb0282ab4774b3639ee31e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1704148
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Most of the VGPU code is Linux specific but lies in common code.
So, until the VGPU code is properly abstracted and made OS-independent,
move all of the VGPU code to the Linux-specific directory.
Handle the corresponding Makefile changes.
Update all #includes to reflect the new paths.
Add the GPL license to the newly added Linux files.
Jira NVGPU-387
Change-Id: Ic133e4c80e570bcc273f0dacf45283fefd678923
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1599472
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Use the nvgpu_alloc_obj_ctx_args structure in Linux-specific code only.
Pass the fields of the structure as separate arguments to all common
functions.
gr_ops_gp10b.h referred to the struct, but it is not used anywhere, so
delete the file.
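The common entry point then takes plain values rather than the ioctl
struct; a before/after sketch with assumed argument names:

  /* before: common code depended on the Linux ioctl args struct */
  int gk20a_alloc_obj_ctx(struct channel_gk20a *c,
                          struct nvgpu_alloc_obj_ctx_args *args);

  /* after: only the needed fields are passed through */
  int gk20a_alloc_obj_ctx(struct channel_gk20a *c, u32 class_num,
                          u32 flags);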
JIRA NVGPU-259
Change-Id: Idba78d48de1c30f205a42da2fe47a9f8c03735f1
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1586563
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Handle the possibility of gr init failing due to smid table
initialization failures.
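In essence the init path now propagates the error instead of ignoring
it; a sketch with an assumed function name:

  err = gr_gk20a_init_sm_id_table(g);
  if (err != 0) {
          goto clean_up;
  }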
Bug 2004378
Change-Id: I904b918a0ea31c32292edb3ab8ac3b1459c38a28
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1581661
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Move most of the dma_buf usage present in the mm_gk20a.c code out to
Linux-specific code and some common/mm code. There are two primary
groups of code:
1. dma_buf priv field code (for holding comptag data)
2. Comptag usage that relies on dma_buf pointers
For (1) the dma_buf code was simply moved to common/linux/dmabuf.c
since most of this code is clearly Linux specific.
The comptag code was a bit more complicated since there are two parts
to it. First, there is the code that manages the comptag memory. This
is essentially a simple allocator. It was moved to common/mm/comptags.c
since it can be shared across all chips. The second set of code was
moved to common/linux/comptags.c since it is the interface between
dma_bufs and the comptag memory.
Two other fixes were done as well:
- Add struct gk20a to the comptag allocator init so that the proper
nvgpu_vzalloc() function could be used (see the sketch below).
- Add the necessary includes to common/linux/vm_priv.h.
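A sketch of the allocator init after the first fix above; the exact
signature and field names are assumed:

  int gk20a_comptag_allocator_init(struct gk20a *g,
                                   struct gk20a_comptag_allocator *allocator,
                                   unsigned long size)
  {
          /* the gk20a pointer is what makes nvgpu_vzalloc() usable here */
          allocator->bitmap = nvgpu_vzalloc(g,
                          BITS_TO_LONGS(size) * sizeof(unsigned long));
          if (allocator->bitmap == NULL) {
                  return -ENOMEM;
          }

          allocator->size = size;

          return 0;
  }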
JIRA NVGPU-30
JIRA NVGPU-138
Change-Id: I96c57f2763e5ebe18a2f2ee4b33e0e1a2597848c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1566628
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Instead of calling the native HAL init function then adding
multiple layers of modification for VGPU, flatten out the sequence
so that all entry points are set statically and visible in a
single file.
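The idea is that the vgpu HAL file assigns every entry point directly,
with no post-hoc patching of the native ops; an illustrative fragment
with assumed entry names:

  static void vgpu_gp10b_init_fifo_ops(struct gpu_ops *gops)
  {
          /* every vgpu entry point is set statically in this one file */
          gops->fifo.preempt_channel = vgpu_fifo_preempt_channel;
          gops->fifo.preempt_tsg = vgpu_fifo_preempt_tsg;
  }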
JIRA ESRM-30
Change-Id: Ie424abb48bce5038874851d399baac5e4bb7d27c
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1574616
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We currently remove a channel from the TSG list and disable all the
channels in the TSG while removing a channel from the TSG.
With this sequence, if any one channel in a TSG is closed, the rest of
the channels are set as timed out and cannot be used anymore.
Fix this sequence as below to allow removing a channel from an active
TSG so that the rest of the channels can still be used:
- disable all channels of the TSG
- preempt the TSG
- check if CTX_RELOAD is set, if support is available;
if CTX_RELOAD is set on the channel, it should be moved to some other
channel
- check if FAULTED is set, if support is available
- if NEXT is set on the channel then the channel is still active;
print an error in this case for the time being, until properly handled
- remove the channel from the runlist
- remove the channel from the TSG list
- re-enable the rest of the channels in the TSG
- clean up the channel (same as for regular channels)
Add the fifo operations below to support checking channel status:
g->ops.fifo.tsg_verify_status_ctx_reload
g->ops.fifo.tsg_verify_status_faulted
Define the ops.fifo.tsg_verify_status_ctx_reload operation for
gm20b/gp10b/gp106 as gm20b_fifo_tsg_verify_status_ctx_reload().
This API checks if the channel to be released has CTX_RELOAD set; if
it does, CTX_RELOAD needs to be moved to some other channel in the TSG.
Remove static from channel_gk20a_update_runlist() and export it.
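A condensed sketch of the resulting unbind sequence; helper names other
than the ops listed above are assumed:

  g->ops.fifo.disable_tsg(tsg);
  g->ops.fifo.preempt_tsg(g, tsg->tsgid);

  if (g->ops.fifo.tsg_verify_status_ctx_reload != NULL) {
          g->ops.fifo.tsg_verify_status_ctx_reload(ch);
  }
  if (g->ops.fifo.tsg_verify_status_faulted != NULL) {
          g->ops.fifo.tsg_verify_status_faulted(ch);
  }

  err = channel_gk20a_update_runlist(ch, false);
  /* remove ch from the TSG list, then re-enable the remaining channels */
  g->ops.fifo.enable_tsg(tsg);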
Bug 200327095
Change-Id: I0dd4be7c7e0b9b759389ec12c5a148a4b919d3e2
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1560637
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
This function is an internal function to the VM manager that allocates
virtual memory space in the GVA allocator. It is unfortunately used in
the vGPU code, though. In any event, this patch cleans up and moves the
implementation of these functions into the VM common code.
JIRA NVGPU-12
JIRA NVGPU-30
Change-Id: I24a3d29b5fcb12615df27d2ac82891d1bacfe541
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1477745
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The RM server retrieves auxiliary data only from IVM.
Modify the IVC commands to send auxiliary data to the RM server using
IVM and not as part of the command message.
VFND-4166
Change-Id: I9bfe33cf9301f7c70709318b810c622ec57b1cdf
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: http://git-master/r/1484130
Reviewed-by: svcboomerang <svcboomerang@nvidia.com>
Tested-by: svcboomerang <svcboomerang@nvidia.com>
This patch begins the major rework of the GPU's virtual memory manager
(VMM). The VMM is the piece of code that handles the userspace interface
to buffers and their mappings into the GMMU. The core data structure is
the VM - for now still known as 'struct vm_gk20a'. Each one of these
structs represents one address space to which channels or TSGs may
bind themselves.
The VMM splits the interface up into two broad categories. First there's
the common, OS independent interfaces; and second there's the OS specific
interfaces.
OS independent
--------------
This is the code that manages the lifetime of VMs and the buffers
inside VMs (search, batch mapping), creation, destruction, etc.
OS Specific
-----------
This handles mapping of buffers as they are represented by the OS
(dma_bufs, for example, on Linux).
This patch is by no means complete. There's still Linux specific functions
scattered in ostensibly OS independent code. This is the first step. A
patch that rewrites everything in one go would simply be too big to
effectively review.
Instead the goal of this change is to simply separate out the basic
OS specific and OS agnostic interfaces into their own header files. The
next series of patches will start to pull the relevant implementations
into OS specific C files and common C files.
JIRA NVGPU-12
JIRA NVGPU-30
Change-Id: I242c7206047b6c769296226d855b7e44d5c4bfa8
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1464939
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Hide the Linux-specific nvgpu_mem fields so that in subsequent patches
core code can use mem_desc instead of struct sg_table.
Routines for accessing system-specific fields will be added as needed.
This is the first step in a fairly major overhaul of the GMMU mapping
routines. There are numerous issues with the current design (or lack
there of): massively coupled code, system dependencies, disorganization,
etc.
JIRA NVGPU-12
JIRA NVGPU-30
Change-Id: I2e7d3ae3a07468cfc17c1c642d28ed1b0952474d
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1464076
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
gk20a_err() and gk20a_warn() require a struct device pointer, which is
not portable across operating systems. The new nvgpu_err() and
nvgpu_warn() macros take a struct gk20a pointer. Convert code to use
the more portable macros.
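The conversion is mechanical; for example (the message string is
illustrative):

  /* before: needs a struct device */
  gk20a_err(dev_from_gk20a(g), "invalid class 0x%x", class_num);

  /* after: portable, takes struct gk20a */
  nvgpu_err(g, "invalid class 0x%x", class_num);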
JIRA NVGPU-16
Change-Id: I071e8c50959bfa81730ca964d912bc69f9c7e6ad
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1457355
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Replace the last of the Linux kmem API usage with nvgpu kmem
calls instead. Several places are left alone - allocating the
struct gk20a in particular.
Also one function was updated in the clk code to take a struct
gk20a as an argument so that it could use nvgpu_kmalloc().
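Typical replacements look like the following; having the struct gk20a
pointer available is why the clk function gained it as an argument:

  /* before */
  ptr = kzalloc(size, GFP_KERNEL);
  kfree(ptr);

  /* after */
  ptr = nvgpu_kzalloc(g, size);
  nvgpu_kfree(g, ptr);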
Bug 1799159
Bug 1823380
Change-Id: I84fc3f8e19c63d6265bac6098dc727d93e3ff613
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1331702
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>