The perf inst block was being treated as vidmem (LFB - local
framebuffer) always, regardless of the type of nvgpu_mem used
for the instance block. On dGPUs this was fine because we
always allocate instance blocks from vidmem. Inst blocks are
allocated with nvgpu_dma_alloc() which chooses vidmem if
vidmem is present, otherwise falls back to sysmem.
When the above fallback logic was deleted, inst blocks were
always allocated in sysmem, even for dGPUs. This
isn't a problem in and of itself, but the logic for the perf
instance block bind operation assumed a VIDMEM inst_block.
Thus this patch uses the nvgpu_aperture_mask() function to
correctly program the required aperture target for the perf's
inst block bind operation.
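As an illustration, the bind now selects the aperture field from the
inst block's actual nvgpu_mem rather than hard-coding the LFB target.
The register/field helper names below follow the usual perf_pmasys_*
pattern but are assumptions, not verbatim code:

  gk20a_writel(g, perf_pmasys_mem_block_r(),
               perf_pmasys_mem_block_base_f(inst_block_ptr) |
               nvgpu_aperture_mask(g, inst_block,
                       perf_pmasys_mem_block_target_sys_ncoh_f(),
                       perf_pmasys_mem_block_target_sys_coh_f(),
                       perf_pmasys_mem_block_target_lfb_f()));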
JIRA NVGPU-990
Change-Id: If6f09a743ee2ad47a6dbfa28cb7c61f1461fd8a7
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1796388
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This flag - has_physical_mode - doesn't seem to do much other than
force the PTE/PDE and inst block addresses to be physical instead
of potentially IOMMUed.
There is a reason to do this on Volta (nvlink not being IOMMU-able
being the primary reason) but this flag seems too general.
The flag was being enabled on all native platforms. The problem is
that some page tables (the Maxwell small page directories) could
be larger than 4KB, which meant that the allocation used for them
could potentially be discontiguous. A discontiguous page directory
is obviously incorrect.
This patch deletes the has_physical_mode flag and instead replaces
the places where it's checked with a check for nvlink being
enabled. Since we _do_ want to program physical PDEs and PTEs for
NVLINK devices (regardless of IOMMU status they always access
memory by physical address) we need a check for NVLINK state.
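Sketched below is the shape of the replacement check; the enable flag
name is an assumption based on nvgpu's nvgpu_is_enabled() convention:

  /* Program raw physical PDE/PTE and inst block addresses only for
   * NVLINK devices, which always access memory by physical address. */
  if (nvgpu_is_enabled(g, NVGPU_SUPPORT_NVLINK)) {
          use_physical_addrs = true;
  }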
Bug 200414723
Change-Id: I09ad86b12d8aabcf9648a22503f4747fd63514dd
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1792163
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
nvgpu_ioctl_tsg_open() does not make sure that the GPU is
unpowergated. Due to this, it leads to a kernel
panic when GPU registers are accessed while powergated:
__gk20a_warn_on_no_regs+0x38/0x58 [nvgpu]
__nvgpu_readl+0x74/0xc8 [nvgpu]
nvgpu_readl+0x28/0x60 [nvgpu]
xxxxx_ce_get_num_pce+0x28/0x70 [nvgpu]
xxxxx_fifo_init_eng_method_buffers+0x64/0x1c0 [nvgpu]
gk20a_tsg_open+0x110/0x1e0 [nvgpu]
nvgpu_ioctl_tsg_open+0x88/0x100 [nvgpu]
gk20a_ctrl_dev_ioctl+0x734/0x2388 [nvgpu]
do_vfs_ioctl+0xc4/0x918
SyS_ioctl+0x94/0xa8
This change fixes the issue by calling gk20a_busy()/gk20a_idle()
in nvgpu_ioctl_tsg_open(), as sketched below.
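A minimal sketch of the fix (simplified; the real ioctl also handles
the TSG-open error paths):

  static int nvgpu_ioctl_tsg_open_sketch(struct gk20a *g)
  {
          int err = gk20a_busy(g);  /* keep GPU unpowergated */

          if (err != 0) {
                  return err;
          }

          /* ... gk20a_tsg_open() and the rest of the ioctl ... */

          gk20a_idle(g);            /* drop the power reference */
          return err;
  }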
Bug 2268533
JIRA NVGPU-1016
Change-Id: I578289e7eb60295d6b6169b754a5cc60f7546fd5
Signed-off-by: Preetham Chandru Ramchandra <pchandru@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1794324
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move implementation of priv_ring HAL to common/priv_ring. Implement
two new HAL APIs to remove illegal dependencies: enable_priv_ring and
enum_ltc.
As enum_ltc can be implemented only from gm20b onwards, bump the
gk20a implementation to be based on gm20b.
JIRA NVGPU-964
Change-Id: I160c2216132aadbcd98bb4a688aeeb2c520a9bc0
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1797025
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Export the below APIs in the gv11b/gr_gv11b.h header so that they can be
called from other files too:
gr_gv11b_set_shader_cut_collector()
gr_gv11b_set_go_idle_timeout()
gr_gv11b_set_coalesce_buffer_size()
gr_gv11b_set_tex_in_dbg()
gr_gv11b_set_skedcheck()
gv11b_gr_set_shader_exceptions()
Bug 2260560
Change-Id: Ic85e35bc223c88c2a54fab09851b8a957b4d1153
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1793525
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
These macros exist to make integer literals used in certain arithmetic
operations explicitly large enough to hold the results of that operation.
The following is an example of this.
In MISRA the destination for a bitwise shift must be able to hold the number
of bits shifted. Otherwise the results are undefined. For example:
256U << 20U
This is valid C code but the result of this _may_ be undefined if the size
of an unsigned by default is less than 24 bits (i.e. 16 bits). The MISRA
checker sees the 256U and determines that the 256U fits in a 16 bit data
type (i.e. a u16). Since a u16 has 16 bits, which is less than 20, this is
an issue.
Of course most compilers these days use 32 bits for the default unsigned
type, but this is not a requirement. Moreover, the same problem could exist
like so:
0xfffffU << 40U
The 0xfffffU is a 32 bit unsigned type, but we are shifting by 40 bits,
which overflows the 32 bit data type. So in this case we need an explicit
cast to 64 bits in order to prevent undefined behavior.
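A standalone sketch of such width-forcing macros (the real nvgpu macro
names and placement may differ):

  #include <stdint.h>
  #include <stdio.h>

  #define U32(x) ((uint32_t)(x))
  #define U64(x) ((uint64_t)(x))

  int main(void)
  {
          /* 256 is widened to 32 bits before the 20-bit shift. */
          uint32_t mb = U32(256) << 20U;

          /* 0xfffff is widened to 64 bits before the 40-bit shift. */
          uint64_t big = U64(0xfffff) << 40U;

          printf("%u %llu\n", mb, (unsigned long long)big);
          return 0;
  }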
Change-Id: If2433fb8c44df0c714487fa3b6b056fc84570df7
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1795391
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We right now define the HAL exec_reg_ops() under the gops.dbg_session_ops
operations. But we have separate gops.regops operations for all the regops,
and that would be the logically correct place for exec_reg_ops().
Move exec_reg_ops() from gops.dbg_session_ops to gops.regops.
Also rename it to exec_regops().
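An illustrative sketch of the destination group (the member signature
is an assumption):

  struct dbg_session_gk20a;
  struct nvgpu_dbg_reg_op;

  struct regops_gops_sketch {
          int (*exec_regops)(struct dbg_session_gk20a *dbg_s,
                             struct nvgpu_dbg_reg_op *ops,
                             unsigned long long num_ops);
          /* ... the other regops entries ... */
  };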
Jira NVGPU-620
Change-Id: If4f70639ffbc892c605f7540a83bce12ed821b52
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1794999
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 15.6 requires that all loop bodies must be enclosed in braces,
including single-statement loop bodies. This patch fixes the MISRA
violations due to single-statement loop bodies without braces by adding
them.
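An illustrative before/after (not taken from the actual diff):

  /* Before: single-statement loop body without braces. */
  for (i = 0U; i < count; i++)
          do_thing(i);

  /* After: braces added to satisfy MISRA Rule 15.6. */
  for (i = 0U; i < count; i++) {
          do_thing(i);
  }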
JIRA NVGPU-989
Change-Id: If79f56f92b94d0114477b66a6f654ac16ee8ea27
Signed-off-by: Srirangan <smadhavan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1791194
Reviewed-by: Adeel Raza <araza@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
CBC base needs to be aligned to 64KB. On Linux this is
achieved by making the compbit backing size a multiple of 64KB.
However the QNX nvmap alloc function does not allocate
memory aligned to the requested size and needs to overallocate
to satisfy the alignment requirement. Make the cbc alloc function
OS-specific to be able to modify the QNX code.
Also align the cbc base address to 64KB before writing it to the
CBC BASE register.
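The alignment arithmetic, assuming the base is rounded up within the
overallocated buffer:

  #define SZ_64K        0x10000ULL
  #define ALIGN_64K(a)  (((a) + SZ_64K - 1ULL) & ~(SZ_64K - 1ULL))

  /* e.g. ALIGN_64K(0x12345678ULL) == 0x12350000ULL */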
Bug 200426427
Change-Id: Ic867501403f2e2a4ba41ad5a8ed6f9c5c8ffa3f4
Signed-off-by: Aparna Das <aparnad@nvidia.com>
(cherry picked from commit 3f1e1133a46ebfc9763c649d7b839d069cae5a36)
Reviewed-on: https://git-master.nvidia.com/r/1786046
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The error returned from the execution of exec_reg_ops was ignored,
so the error values were not propagated to the caller methods.
This patch handles the error occurrence in the exec_reg_ops call.
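A simplified sketch of the propagation (the call-site details are
assumptions):

  err = g->ops.dbg_session_ops.exec_reg_ops(dbg_s, ops, num_ops);
  if (err != 0) {
          nvgpu_err(g, "exec_reg_ops failed: %d", err);
          return err;   /* propagate instead of discarding */
  }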
Bug 2245743
Change-Id: I0d696c116fc1b2fce0e14ac7a05e1d85b5d18129
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1775818
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
clk_arb.h and gk20a.h have circular dependencies on each other. This is
removed by forward declaring struct gk20a in clk_arb.h and removing the
header gk20a.h from clk_arb.h, and similarly forward declaring struct
nvgpu_clk_arb in gk20a.h and removing the header clk_arb.h from gk20a.h,
along with putting the headers in every compilation unit which calls
clk_arb.h related methods.
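The forward-declaration pattern, sketched for clk_arb.h (simplified
header contents):

  #ifndef NVGPU_CLK_ARB_H
  #define NVGPU_CLK_ARB_H

  struct gk20a;   /* forward declaration replaces #include gk20a.h */

  int nvgpu_clk_arb_init_arbiter(struct gk20a *g);

  #endif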
JIRA NVGPU-597
Change-Id: I7cedca17206c148b21d93e5d7f0d88c2f98b979a
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1790915
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new ioctl to set the SM_EXCEPTION_TYPE_MASK on a
dbg session.
Currently only the SM_EXCEPTION_TYPE_MASK_FATAL type is supported.
If this type is set then the code will skip RC recovery and
instead trigger CILP preemption.
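Sketch of the new policy (the mask bit name and variable are
assumptions based on the naming above):

  #define SM_EXCEPTION_TYPE_MASK_FATAL  (0x1U << 0U)

  if ((dbg_s_type_mask & SM_EXCEPTION_TYPE_MASK_FATAL) != 0U) {
          /* skip RC recovery; trigger CILP preemption instead */
  }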
bug 200412641
JIRA NVGPU-702
Change-Id: I4b1f18379ee792cd324ccc555939e0f4f5c9e3b4
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1729792
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Fix MISRA rule 10.1 violations involving the need_reset variable
in gk20a_gr_isr().
Change its type to bool and set it to true any time one of
the pending condition checks returns non-zero.
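An illustrative before/after (not the actual diff):

  /* Before: u32 need_reset, with results ORed in. */
  need_reset |= check_pending_exception();

  /* After: explicit bool, set on any non-zero check. */
  bool need_reset = false;

  if (check_pending_exception() != 0) {
          need_reset = true;
  }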
JIRA NVGPU-650
Change-Id: I2f87b68d455345080f7b4c68cacf515e074c671a
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1793633
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Fix MISRA rule 10.1 violations in gr_gk20a_init_ctx_vars_sim().
Instead of logically ORing alloc_xxx_list_yyy() results into
the signed err variable, just bail immediately if an allocation
request fails.
Also made changes to sync gr_gk20a_init_ctx_vars_sim() behavior
with gr_gk20a_init_ctx_vars_fw() behavior:
* return a valid errno on failure
* free any previously allocated resources on failure
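Sketch of the new flow, reusing the alloc_xxx_list_yyy() placeholder
above as a hypothetical helper name:

  err = alloc_xxx_list_yyy(g, &list);
  if (err != 0) {
          goto clean_up;   /* bail on the first failed allocation */
  }
  /* ... further allocations ... */

  clean_up:
          /* free previously allocated lists */
          return err;      /* a valid errno, as in the _fw() variant */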
JIRA NVGPU-650
Change-Id: Ie5ea78438da59896da2a9f562d01e46ffaf56dec
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1787042
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In nvgpu_ioctl_channel_reg_ops(), we right now first check if the context
is allocated or not, and if the context is not allocated we fail the regops
operation. But it is possible that the regops operation includes only
global regops, which do not need the global context to be allocated.
So move this global context check from nvgpu_ioctl_channel_reg_ops() to
exec_regops_gk20a(), and apply it only if context ops are included in the
regops.
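Sketch of the relocated check (the helper names are hypothetical):

  /* In exec_regops_gk20a(): demand an allocated gr context only
   * when the batch actually contains context regops. */
  if (ops_contain_ctx_ops && !gr_ctx_allocated(ch)) {
          return -EINVAL;
  }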
Bug 200431958
Change-Id: Iaa4953235d95b2106d5f81a456141d3a57603fb9
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1789262
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Once the GPU is powered off, i.e. power_on is set to false, the nvgpu
isr does not handle stall/nonstall irqs. Depending upon the state
of the GPU, this can result in either of the following errors:
1) irq 458: nobody cared (try booting with the "irqpoll" option)
2) "HSM ERROR 42, GPU" from SCE if it detects that an interrupt is
not serviced in time.
Fix these by masking all interrupts just before GPU power off,
as nvgpu won't be handling any irq anymore.
While masking interrupts, if there are any pending interrupts,
report those with a log message.
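Sketch of the power-off path (the register/accessor names are
illustrative, not the real mc_* defines):

  /* Mask everything: nvgpu will not service irqs once power_on
   * goes false. */
  gk20a_writel(g, mc_intr_en_clear_r(0), 0xffffffffU);

  /* Report anything still pending so it is not silently dropped. */
  pending = gk20a_readl(g, mc_intr_r(0));
  if (pending != 0U) {
          nvgpu_info(g, "pending intr 0x%08x at power off", pending);
  }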
Bug 1987855
Bug 200424832
Change-Id: I95b087f5c24d439e5da26c6e4fff74d8a525f291
Signed-off-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1770802
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
-Renamed "struct pmu_queue" to "struct
nvgpu_falcon_queue" & moved to falcon.h
-Renamed pmu_queue_* functions to flcn_queue_* &
moved to new file falcon_queue.c
-Created ops for queue functions in struct
nvgpu_falcon_queue to support different queue
types like DMEM/FB-Q.
-Created ops in nvgpu_falcon_engine_dependency_ops
to add engine specific queue functionality & assigned
correct HAL functions in hal*.c file.
-Made changes in dependent functions as needed to replace
struct pmu_queue & to call queue functions using the
nvgpu_falcon_queue data structure.
-Replaced input param "struct nvgpu_pmu *pmu" with
"struct gk20a *g" for pmu ops pmu_queue_head/pmu_queue_tail
& also for functions gk20a_pmu_queue_head()/
gk20a_pmu_queue_tail().
-Made changes in nvgpu_pmu_queue_init() to use nvgpu_falcon_queue
for PMU queue.
-Modified Makefile to include falcon_queue.o
-Modified Makefile.sources to include falcon_queue.c
Change-Id: I956328f6631b7154267fd5a29eaa1826190d99d1
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1776070
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The use of the _THIS_IP_ macro in nvgpu introduces two separate
MISRA Rule 11.6 violations.
The first is when the label address (which gcc generates as
a void *) is cast to an unsigned long, and the second is when that
unsigned long is cast back to a void * in the timer and kmem code
that track the value.
Skipping the intermediate use of unsigned long eliminates these
violations. To do this, references to _THIS_IP_ are replaced
with a new (compliant) _NVGPU_GET_IP_ macro.
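A sketch of the compliant macro using GNU C labels-as-values; unlike
_THIS_IP_ there is no intermediate unsigned long (the real nvgpu
definition may differ in detail):

  #define _NVGPU_GET_IP_  ({ __label__ __here__; __here__: &&__here__; })

  void *ip = _NVGPU_GET_IP_;   /* stays a void * end to end */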
JIRA NVGPU-895 : MISRA Rule 11.6 violations
Change-Id: I5ea999d8e2b467257fa190b485fa971adcbd0a2b
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1774531
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 11.6 prohibits the casting of an integer value to a
void *.
The nvgpu allocator used for the fence pool stores the base
address of the associated memory as a u64 and returns it via
nvgpu_alloc_base().
In gk20a_free_fence_pool() this u64 value was cast to a void *
before being passed to nvgpu_vfree() (leading to the violation).
This change modifies gk20a_free_fence_pool() to cast the base
address back to the original struct gk20a_fence * to eliminate
the violation.
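A simplified sketch of the fix (the allocator member name is an
assumption):

  struct gk20a_fence *base;

  /* Cast the u64 base back to the original pointer type instead of
   * to a bare void *. */
  base = (struct gk20a_fence *)(uintptr_t)
         nvgpu_alloc_base(&c->fence_allocator);
  nvgpu_vfree(g, base);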
JIRA NVGPU-895: MISRA Rule 11.6 violations
Change-Id: If89cf2c1bc8ea4b0b59da4cf8b1c167738f6badc
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1774530
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In the current code, gk20a.h includes io.h, which gets directly included
in a lot of other files. io.h contains methods which use a struct
gk20a as a parameter, leading to a circular dependency between io.h
and gk20a.h. This can be mitigated by removing io.h from gk20a.h as
part of a larger effort to move gk20a.h to nvgpu/gk20a.h.
JIRA NVGPU-597
Change-Id: I93e504fa9371b88152737b342a75580c65e8f712
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1787316
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
-For silicon platforms, gk20a_get_gr_idle_timeout returns
3000 ms i.e. 3 sec. Currently this time is used for
preempt polling, and this conflicts with the channel
timeout if polling times out. Use fifo_eng_timeout_us converted
to ms for preempt polling.
-In case of a preempt timeout, do not issue recovery
for silicon platforms; ctxsw timeout will trigger recovery
if needed. For non-silicon platforms, issue preempt timeout rc
if preempt times out.
Bug 2113657
Bug 2064553
Bug 2038366
Bug 2028993
Bug 200426402
Change-Id: I8d9f58be9ac634e94defa92a20fb737bf256d841
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1762076
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
-Recovery can be called for various types of faults. Acquire the
runlist_lock for all runlists so that the current teardown is done
before proceeding to the next one (see the sketch after this list).
-For legacy chips teardown is done by triggering an mmu fault, so
make sure runlist_locks are acquired during teardown and also
during handling of the mmu fault.
-gk20a_fifo_handle_mmu_fault is renamed as
gk20a_fifo_handle_mmu_fault_locked
-gk20a_fifo_handle_mmu_fault called from gk20a_fifo_teardown_ch_tsg
is replaced with gk20a_fifo_handle_mmu_fault_locked
-gk20a_fifo_handle_mmu_fault acquires/release runlist_lock for all
runlists and calls gk20a_fifo_handle_mmu_fault_locked
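Sketch of the locking (the field names are close to, but not
verbatim, the nvgpu ones):

  u32 i;

  for (i = 0U; i < g->fifo.max_runlists; i++) {
          nvgpu_mutex_acquire(&g->fifo.runlist_info[i].runlist_lock);
  }

  /* ... gk20a_fifo_handle_mmu_fault_locked() / teardown ... */

  for (i = 0U; i < g->fifo.max_runlists; i++) {
          nvgpu_mutex_release(&g->fifo.runlist_info[i].runlist_lock);
  }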
Bug 2113657
Bug 2064553
Bug 2038366
Bug 2028993
Change-Id: I973d7ddb6924b50bae2d095152867e99c87e780a
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1761197
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
For IOCTL NVGPU_DBG_GPU_IOCTL_ACCESS_FB_MEMORY, we do not allow a buffer
size which is not 4 byte aligned.
Remove this hard restriction and allow non-4-byte-aligned buffer sizes too,
since we don't really need to enforce this restriction.
Bug 2265535
Change-Id: Ic4d60604be3698e8629f2b289c9e2d19e20ea525
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1784511
Reviewed-by: Kajetan Dutka <kdutka@nvidia.com>
Tested-by: Kajetan Dutka <kdutka@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In order to avoid the circular dependencies,
rearrange the static inline functions from the
gk20a.h file.
Moved the gk20a_gr_flush_channel_tlb function to
gr_gk20a.c and removed the #include of gr_gk20a.h
from gk20a.h.
Added a helper header utils.h to
hold all generic static inline functions which
have no reference to gpu related structures.
ptimer related functions are moved to
ptimer.h.
Implementations for as and pmu are moved to
corresponding files.
JIRA NVGPU-624
Change-Id: I4e956326e773ba037bf3a1696cc4c462085dbbe5
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1781941
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Added a debugfs node under the ltc directory with the name:
intr_illegal_compstat_enable
Enabling/disabling of the ltc_illegal_compstat intr is
possible through this debugfs node.
Since ltc state is lost with rail-gating, this setting is
cached and will be populated during ltc initialization.
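A minimal sketch of the node creation, assuming debugfs_create_bool
over a cached flag that is re-applied at ltc init (the real code may
use custom fops):

  #include <linux/debugfs.h>

  static bool intr_illegal_compstat_enable;

  void ltc_debugfs_init_sketch(struct dentry *ltc_dir)
  {
          debugfs_create_bool("intr_illegal_compstat_enable", 0644,
                              ltc_dir, &intr_illegal_compstat_enable);
  }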
Bug 2099406
Change-Id: I4bf62228dfd2bbb94f87f923f9f4f6e5ad0b07f0
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1774683
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The illegal compstat interrupt indicates an unexpected compression status
given the kind. Since dirty tile mappings are expected to have
discrepancies in compbit state, disable the illegal compstat interrupt.
Bug 2099406
Change-Id: I90207c6bc8a8cfa656ea9a0b4f5605106751c12e
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1774572
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- adds static tpc-powergating through sysfs.
- the active tpc count will remain until the GPU/system is booted again.
- tpc_pg_mask can be written only after the GPU probe finishes and
before GPU boot is triggered.
Note:
To be able to use this feature, we need to change the boot/init
scripts of the OS (used with the nvgpu driver) to write to the sysfs
nodes before posting the discover-image-size query to FECS.
Bug 200406784
Change-Id: Id749c7a617422c625f77d0c1a9aada2eb960c4d0
Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1742422
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>