Commit Graph

2924 Commits

Author SHA1 Message Date
Deepak Nibade
221475f753 gpu: nvgpu: add profiler apis to manage PMA stream
Support a new IOCTL to manage PMA stream metadata by adding the API below:
nvgpu_prof_ioctl_pma_stream_update_get_put()

Add nvgpu_perfbuf_update_get_put() to handle all the updates coming
from userspace and to pass all required information.

Add gops.perf.update_get_put() to handle all HW accesses required in
perf HW unit.

Add gops.perf.bind_mem_bytes_buffer_addr() to bind the available bytes
buffer while binding HWPM streamout.
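
A minimal sketch of the layering described above, with hypothetical
parameter lists (the real nvgpu signatures are not reproduced here): the
IOCTL handler calls the common nvgpu_perfbuf_update_get_put() helper,
which forwards the HW access to the gops.perf.update_get_put() HAL.

#include <stdint.h>

struct gk20a;

struct gops_perf {
    /* chip-specific HW access for PMA stream get/put updates */
    int (*update_get_put)(struct gk20a *g, uint64_t bytes_consumed,
                          uint64_t *bytes_available);
};

struct gk20a {
    struct { struct gops_perf perf; } ops;
};

/* common layer: takes the userspace update and hands it to the perf HAL */
static int nvgpu_perfbuf_update_get_put(struct gk20a *g,
                                        uint64_t bytes_consumed,
                                        uint64_t *bytes_available)
{
    return g->ops.perf.update_get_put(g, bytes_consumed, bytes_available);
}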

Bug 2510974
Jira NVGPU-5360

Change-Id: Ibacc2299b845e47776babc081759dfc4afde34fe
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2406484
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Deepak Nibade
5844151a93 gpu: nvgpu: add profiler apis to alloc/free pma stream
Add two new IOCTL APIs to allocate/free the pma stream. Add two new
functions to handle them:
nvgpu_prof_ioctl_alloc_pma_stream()
nvgpu_prof_ioctl_free_pma_stream()

Allocation of a pma stream includes the steps below:
- Initializing perfbuf VM
- Mapping PMA buffer into perfbuf VM
- Mapping PMA byte buffer into perfbuf VM
- Mapping PMA byte buffer to CPU virtual address space

Store all of the above data in struct nvgpu_profiler_object for
reference. OS-specific data is stored in struct
nvgpu_profiler_object_priv.

Update HWPM streamout bind/unbind sequence to enable/disable perfbuf
respectively.

Also release the pma stream resources in the profiler object close path
if they have not been explicitly released by user space through the
IOCTL call.

Bug 2510974
Jira NVGPU-5360

Change-Id: I126633746cabc4e293c7ad7c49806866a897949d
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2406483
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Vedashree Vidwans
49c9f0c137 gpu: nvgpu: accept user vma size in vm init
Modify nvgpu_vm_init to accept low_hole, user_reserved and
kernel_reserved. This simplifies argument limit checks and makes the
code more legible.

JIRA NVGPU-5302

Change-Id: I62773dd7b06264a3b6cb8896239b24c49fa69f9b
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2394901
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Lakshmanan M
aef3367ca5 gpu: nvgpu: Add multi GR gr_config utility support
This CL covers the following code changes:
1) Added an API to get the gr_config on a per gr_instance_id basis.
2) Added an API to convert from gpu_instance_id to gr_instance_id.
3) Modified the nvgpu_gr_exec_with_ret_for_instance() utility to handle
   a generic return data type.

JIRA NVGPU-5662
JIRA NVGPU-5663

Change-Id: I4ab732e15cdbda25672975f99e23b5e5d27decb0
Signed-off-by: Lakshmanan M <lm@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2413195
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Deepak Nibade
ebb66b5d50 gpu: nvgpu: add macros to get current GR instance
Add macros to get the current GR instance id and pointer:
nvgpu_gr_get_cur_instance_ptr()
nvgpu_gr_get_cur_instance_id()

This approach makes sure that, in MIG mode, the caller gets the GR
instance pointer while holding the mutex g->mig.gr_syspipe_lock. Trying
to access the current GR instance outside of this lock in MIG mode
dumps a warning.

Return the 0th instance in case MIG mode is disabled.

Use these macros in nvgpu instead of referencing g->mig.cur_gr_instance
directly.

Store the instance id in struct nvgpu_gr so that the GR instance id can
be retrieved in functions where a struct nvgpu_gr pointer is already
available.
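
An illustrative sketch of the behaviour described above, using
simplified stand-in fields rather than the real struct layout: in MIG
mode the helper insists that the syspipe lock is held, otherwise it
falls back to instance 0.

#include <assert.h>
#include <stdbool.h>

struct nvgpu_gr { int instance_id; };

struct gk20a {
    struct nvgpu_gr *gr;             /* array of GR instances */
    struct {
        bool enabled;                /* MIG mode on/off */
        int cur_gr_instance;
        bool syspipe_lock_held;      /* stand-in for gr_syspipe_lock */
    } mig;
};

static struct nvgpu_gr *gr_get_cur_instance_ptr(struct gk20a *g)
{
    if (!g->mig.enabled) {
        return &g->gr[0];            /* non-MIG: always the 0th instance */
    }

    /* in MIG mode the caller must hold g->mig.gr_syspipe_lock;
     * the assert stands in for the warning the real code dumps */
    assert(g->mig.syspipe_lock_held);

    return &g->gr[g->mig.cur_gr_instance];
}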

Jira NVGPU-5648

Change-Id: Ibfef6a22371bfdccfdc2a7d636b0a3e8d0eff6d9
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2413140
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Deepak Nibade
db20451d0d gpu: nvgpu: fix pmm chiplet offsets
gr_gv100_init_hwpm_pmm_register() and gr_gv100_set_pmm_register()
currently assume a common chiplet stride for all sys/fbp/gpc units and
use the common API g->ops.perf.get_pmm_per_chiplet_offset() to get the
stride.

Chiplet strides are the same for all partitions only by chance, and a
future chip might change that.

Hence add and use the three separate HALs below to get the appropriate
strides:
g->ops.perf.get_pmmsys_per_chiplet_offset()
g->ops.perf.get_pmmgpc_per_chiplet_offset()
g->ops.perf.get_pmmfbp_per_chiplet_offset()

Also store the sys/fbp/gpc perfmon counts in struct gk20a after the
first query instead of querying them again and again; querying the
counts from HW is time consuming.
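
A small sketch of the caching described in the last paragraph, with
made-up field and query names: the HW is asked once and the count is
then served from struct gk20a.

#include <stdint.h>

struct gk20a {
    uint32_t num_sys_perfmon;   /* 0 means "not queried from HW yet" */
};

/* placeholder for the (slow) HW register walk that counts perfmons */
static uint32_t query_num_sys_perfmon_from_hw(struct gk20a *g)
{
    (void)g;
    return 2U;
}

static uint32_t get_num_sys_perfmon(struct gk20a *g)
{
    /* query HW only on the first call, then reuse the cached count */
    if (g->num_sys_perfmon == 0U) {
        g->num_sys_perfmon = query_num_sys_perfmon_from_hw(g);
    }
    return g->num_sys_perfmon;
}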

Bug 2510974
Jira NVGPU-5360

Change-Id: I186009221009780d561617c0cd6f535854db585f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2413108
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Deepak Nibade
d419005222 gpu: nvgpu: NULL check config->gpc_zcb_count in MIG mode
config->gpc_zcb_count is not allocated in MIG mode. Add NULL checks
before accessing this in case it is not allocated.

Jira NVGPU-5648

Change-Id: I4c1169772310ae4776063a91ba298af9e5bfe874
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2413840
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
mkumbar
13ca3c9a37 gpu: nvgpu: boot enabled GPC’s using SEC2 RTOS ucode
Read the mask of GPCs so that only enabled GPCs are booted and
floorswept GPCs are discarded.
The GPC mask info read here needs to be sent to SEC2 RTOS to bootstrap
the enabled GPCs.

JIRA NVGPU-5466

Change-Id: Id4ed7d4072730da8e128cd43af92a1a6b1aac8ad
Signed-off-by: mkumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2394004
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Shashank Singh <shashsingh@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: Dinesh T <dt@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2020-12-15 14:13:28 -06:00
Deepak Nibade
6a69ea235e gpu: nvgpu: disable graphics specific init functions in MIG mode
MIG mode does not support graphics, ELPG, and use cases like TPC
floorsweeping. Skip all such initialization functions in common.gr
unit if MIG mode is enabled.

Set can_elpg to false if MIG mode is enabled.

Jira NVGPU-5648

Change-Id: I03656dc6289e49a21ec7783430db9c8564c6bf1f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2411741
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Lakshmanan M <lm@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Deepak Nibade
7a937a6190 gpu: nvgpu: add debug logs for common.gr debugging
Add a separate flag gpu_dbg_gr to enable common.gr specific debugging.
Add this flag to all the existing debug logs that use gpu_dbg_fn or
gpu_dbg_info. Also add many other debug logs that might be helpful.

Remove the debug log in gv11b_gr_init_get_nonpes_aware_tpc() as it
dumps too much data that does not seem useful.

Batch all interrupt enable functions in gr_init_setup_hw() together for
readability.

Jira NVGPU-5648

Change-Id: I0b857650122cdb1f974b452d28c26e7f142baf61
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2411740
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Lakshmanan M <lm@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Seeta Rama Raju
64b3d25921 gpu: nvgpu: Fix for Regular coverity(Vanilla) violations
- Fix the vanilla dead-code violation.
  When "aperture == APERTURE_INVALID" or "aperture >= APERTURE_MAX_ENUM",
  the condition is already handled at the start of the function, so
  execution never reaches the switch cases for "APERTURE_INVALID" and
  "APERTURE_MAX_ENUM".

JIRA NVGPU-6051

Change-Id: I94056aa9e3cb2419e2841976b1d64e9714dc7bcc
Signed-off-by: Seeta Rama Raju <srajum@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2411341
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Dinesh T <dt@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2020-12-15 14:13:28 -06:00
Vedashree Vidwans
e0dd79cd43 gpu: nvgpu: rearch mc reset and enable hals
Remove current mc hals
- mc.reset()
- mc.enable()
- mc.disable()
- mc.reset_mask()
- mc.reset_engine()
- mc.reset_engine_enable()

Add new mc hals
- mc.enable_units(g, units, enable)
  > enable/disable given unit(s)
- mc.enable_dev(g, dev, enable)
  > enable/disable engine represented by given device pointer
- mc.enable_devtype(g, devtype)
  > enable/disable all engines of given devtype

Move the common mc intr functions to common/mc/mc_intr.c.
Add the common mc functions below:
- nvgpu_mc_reset_units(g, units)
  > reset given logical OR of nvgpu unit bitmap
- nvgpu_mc_reset_dev(g, dev)
  > reset given single engine via dev
  > if engine is graphics, reset gpcs for nvgpu_next
- nvgpu_mc_reset_devtype(g, devtype)
  > reset all engines of given devtype
  > if devtype is graphics, reset gpcs for nvgpu_next

Bug 200648985
Bug 3109773

Change-Id: Idc67a14a0a7cde83de44fbfbec13007fead3ed5c
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2408523
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Deepak Nibade
e6e7561084 gpu: nvgpu: execute nvgpu_gr_init_support for each GR instance
nvgpu_gr_init_support() right now executes each of its functions for
each GR instance separately. Instead of looping per function, move the
GR engine initialization sequence to a separate gr_init_support_impl()
and execute that function for each instance.

Update the functions below to take an nvgpu_gr pointer as a parameter.
These functions need not worry about the GR instance; they just operate
on the provided instance pointer.
gr_init_setup_hw
gr_init_config
gr_init_setup_sw
gr_init_sm_id_config_early
gr_init_ctxsw_falcon_support

Add a new static function gr_init_support_finalize() to set the ready
status and invoke waiters. Execute this per GR instance.

gr_init_ecc_init() and nvgpu_cg_elcg_enable_no_wait() do not need to
run per instance.
gr_init_ecc_init() will later be updated to allocate metadata for all
instances.

Jira NVGPU-5648

Change-Id: Ia6860f2bdfe0080aebf8930266d3f51bfd805e36
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2410703
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Lakshmanan M <lm@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2020-12-15 14:13:28 -06:00
Deepak Nibade
bafeea3530 gpu: nvgpu: setup HW for each GR instance
Get the number of SMs from the GR instance specific nvgpu_gr_config
pointer instead of the global SM count in the functions below:
nvgpu_gr_fs_state_init()
gv11b_gr_init_sm_id_config()

Update nvgpu_gr_config_get_gpc_skip_mask() to return 0 in case gpc_index
is greater than the available gpc_count. This is not MIG specific;
based on code review it is possible even today on existing chips.
See gm20b_gr_init_pd_skip_table_gpc().

Update nvgpu_gr_get_override_ecc_val() to return the GR instance
specific value.

Execute gr_init_setup_hw() for each GR instance.

Disable the failing unit tests below:
nvgpu_gr_fs_state.test_gr_fs_state_error_injection
nvgpu_gr_init.test_gr_init_hal_config_error_injection

Jira NVGPU-5648

Change-Id: Ie8f1c0c304c634756786d85facf336a5c9ae8195
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2410702
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Lakshmanan M <lm@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2020-12-15 14:13:28 -06:00
Deepak Nibade
3df2ed4f82 gpu: nvgpu: setup SW for each GR instance
Execute gr_init_setup_sw() for each GR instance.
Update all of the functions called from this function to receive the
nvgpu_gr pointer explicitly.

Separate out the nvgpu_gr_zbc_init() call into gr_init_setup_sw() and
rename gr_init_ctx_and_map_zbc() to gr_init_ctx_bufs() for more clarity.

Call gr_init_ecc_init() from nvgpu_gr_init_support() since it does not
need to be executed per GR instance.

Initialize the mutex etc. in nvgpu_gr_alloc() for consistency.

Jira NVGPU-5648

Change-Id: I8e990e11458c05c1b53a4d6710cc2ec3545762a8
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2410701
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Lakshmanan M <lm@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2020-12-15 14:13:28 -06:00
deepak goyal
215403552f gpu: nvgpu: falcon2 core loading support
- Added ops for new core.
- Added firmware structs for new core.

JIRA NVGPU-5736

Change-Id: Ifebc8987bf3a749803c1c5539e7d08716c1842a4
Signed-off-by: deepak goyal <dgoyal@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2372104
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Srirangan Madhavan <smadhavan@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Lakshmanan M
47dc015b86 gpu: nvgpu: Add physical gpu instance support
This patch adds physical gpu instance support when MIG
is enabled.

JIRA NVGPU-5647

Change-Id: Ic642b88ebc70ea6114e63c2287db8bca00860c67
Signed-off-by: Lakshmanan M <lm@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2410698
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2020-12-15 14:13:28 -06:00
Deepak Nibade
83691e088f gpu: nvgpu: initialize ctx state for each GR instance
Execute nvgpu_gr_init_ctx_state() for each GR instance. Move it under
gr_init_ctxsw_falcon_support(), which is already executed for each
instance.

Update the API to accept a struct nvgpu_gr pointer for convenience; the
API does not need to know about other instances.

For the reset path, continue using g->gr instead of a specific instance.
This will be revisited when the entire reset path is refactored.

Jira NVGPU-5648

Change-Id: I8879bf3b44bb01f6b8053f1aecbd550f49837520
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2409535
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Deepak Nibade
8d2cb311cb gpu: nvgpu: return current GR instance pointers
Update the APIs below to return pointers specific to the current GR
instance instead of the 0th instance:

nvgpu_gr_get_falcon_ptr()
nvgpu_gr_get_config_ptr()
nvgpu_gr_get_intr_ptr()

Jira NVGPU-5648

Change-Id: Id9608fb40a1f23ec3466cb205002c10b40d08876
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2409534
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Deepak Nibade
3b746dce0c gpu: nvgpu: use a falcon flag instead of enabled bit
The common.gr unit right now makes use of a capability bit
NVGPU_PMU_FECS_BOOTSTRAP_DONE to ensure the recovery path hits a
different routine. This is actually needless, and a common check
cannot be used for all GR instances anyway.

Delete this capability bit. Add and use a new flag,
coldboot_bootstrap_done, under struct nvgpu_gr_falcon.

Jira NVGPU-5648

Change-Id: I46faea6f07cf054f17a3215d4cbbe0fc8a6382ae
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2409533
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Deepak Nibade
b6c72410bb gpu: nvgpu: execute CTXSW ucode initialization per GR instance
Move CTXSW ucode initialization to a separate static API
gr_init_ctxsw_falcon_support() and execute it per GR instance with
nvgpu_gr_exec_with_ret_for_each_instance().

Jira NVGPU-5648

Change-Id: I6e0fa72bd568eaac027bb12edcdf90255336f0a1
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2409532
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Seshendra Gadagottu
43242fa878 gpu: nvgpu: init ctxsw state after gr reset
Ctxsw state will be lost after a gr reset. After the gr reset in the
recovery sequence, re-initialize the ctxsw state to send the fecs
methods below:
gr_fecs_method_push_adr_discover_image_size_v()
gr_fecs_method_push_adr_discover_pm_image_size_v()
gr_fecs_method_push_adr_discover_zcull_image_size_v()
gr_fecs_method_push_adr_discover_preemption_image_size_v()

Without these methods being sent to ctxsw, fecs will generate
host error interrupts indicating mismatches in the ctxsw image. The
above fecs methods need to be sent even if they were already sent
during golden context creation.

Bug 3109773

Change-Id: I2aeb92da8fa1961903ab95ef90f47906a1bb32c4
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2406685
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Deepak Nibade
da43acf639 gpu: nvgpu: execute early SM id config for each instance
Execute gops.gr.init.sm_id_config_early() for each GR instance with
nvgpu_gr_exec_with_ret_for_each_instance()

Jira NVGPU-5648

Change-Id: I7023ed5c7d65d43eb7bb8384617464a39c846f56
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2408419
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Peter Daifuku
dac7c587e9 nvgpu: don't unmap unallocated global ctx buffers
In nvgpu_gr_ctx_unmap_global_ctx_buffers(), don't unmap
buffers that were never allocated.

Issue a warning in nvgpu_gmmu_do_update_page_table() if unmapping and
virt_addr is 0.
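
A minimal sketch of both guards, with stand-in types rather than the
real nvgpu buffer or GMMU structures: skip buffers whose GPU VA was
never assigned, and warn if an unmap reaches the page-table code with
virt_addr 0.

#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

struct ctx_buf { uint64_t gpu_va; };   /* 0 means never mapped */

static void unmap_global_ctx_buffer(struct ctx_buf *buf)
{
    if (buf->gpu_va == 0ULL) {
        return;                 /* never allocated: nothing to unmap */
    }
    /* ... perform the actual unmap ... */
    buf->gpu_va = 0ULL;
}

/* page-table side: complain loudly about an unmap of address 0 */
static void update_page_table(uint64_t virt_addr, bool unmapping)
{
    if (unmapping && virt_addr == 0ULL) {
        fprintf(stderr, "WARN: unmap requested for virt_addr 0\n");
    }
    /* ... walk and update the page tables ... */
}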

Bug 200648688
Bug 3093183

Change-Id: Ia2cb5f40bbb6c35575705571eb8c900f4495d58e
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2408315
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2020-12-15 14:13:28 -06:00
Deepak Nibade
fc12a284bf gpu: nvgpu: initialize per GR instance config
Expose the two new APIs below from the common.grmgr unit:
nvgpu_grmgr_get_gr_num_gpcs() - get the per instance number of GPCs
nvgpu_grmgr_get_gr_gpc_phys_id() - get the physical GPC id for a MIG
engine local id in the corresponding instance

Execute gr_init_config() for each GR instance.
Add gr_config_init_mig_gpcs() to initialize GPC data in case MIG is
enabled. Separate out gr_config_init_gpcs() for legacy GPC data
initialization.

These functions will initialize the data below in struct nvgpu_gr_config:
max_gpc_count
gpc_count
gpc_mask
gpc_tpc_mask[gpc_count]
max_tpc_per_gpc_count

The rest of the values in struct nvgpu_gr_config are either based on the
above values or read from HW after setting the GPC PRI window.

In gr_config_alloc_struct_mem(), rename total_gpc_cnt to total_tpc_cnt
since it represents the total TPC count and not the GPC count. Remove
the temp3 variable since its name gives no idea of its usage.

Jira NVGPU-5648

Change-Id: I646cac2ddc312e72b241b1b2a0e51a5cce141535
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2406390
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: Lakshmanan M <lm@nvidia.com>
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Deepak Nibade
002edb782a gpu: nvgpu: move cur_gr_instance tracking to MIG infra
Move cur_gr_instance from struct gk20a to struct nvgpu_mig since this
tracking is really MIG specific.

Jira NVGPU-5648

Change-Id: I27b124925c2291e352ef9456c7189da0bc447a42
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2406389
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Lakshmanan M <lm@nvidia.com>
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Seshendra Gadagottu
a6d7b48665 gpu: nvgpu: sim: avoid memory leak with sim buffers
Sim buffers are allocated in nvgpu_sim_init_late, which is called
during each rail gate exit. Sim buffers are de-allocated in
nvgpu_free_sim_support, which is called only at module exit.
So, to avoid memory leaks, allocate sim buffers only if they are not
already allocated.
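
A sketch of the allocate-once pattern, with illustrative names (the real
sim support uses nvgpu_mem/DMA allocations): allocation on rail-gate
exit is skipped when the buffer already exists, and the single free
happens at module exit.

#include <stddef.h>
#include <stdlib.h>

struct sim_buf { void *cpu_va; size_t size; };

/* called on every railgate exit; only allocates the first time through */
static int sim_buf_init_once(struct sim_buf *buf, size_t size)
{
    if (buf->cpu_va != NULL) {
        return 0;               /* already allocated, nothing to do */
    }
    buf->cpu_va = malloc(size);
    if (buf->cpu_va == NULL) {
        return -1;
    }
    buf->size = size;
    return 0;
}

/* called only at module exit */
static void sim_buf_free(struct sim_buf *buf)
{
    free(buf->cpu_va);
    buf->cpu_va = NULL;
    buf->size = 0;
}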

Jira NVGPU-6047

Change-Id: I7463866f9cb317aac43ad1d81f82f63ca301d38a
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2407637
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Alex Waterman
2b48aa5b0c gpu: nvgpu: Add device for_each macro
Add a macro to iterate over a device list; it is just a wrapper to
the nvgpu_list_for_each() macro. It lets code iterate over the
list of detected devices without being aware of the underlying
instance IDs.

This also removes the need to do a separate nvgpu_device_get()
and subsequent NULL checking. This will reduce overhead for
unit testing!
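
A generic illustration of the idea with simplified list and device types
(not nvgpu's actual definitions): the wrapper hides the list plumbing so
callers only ever see device pointers.

#include <stddef.h>

struct list_node { struct list_node *next; };

struct device {
    int engine_type;
    struct list_node node;
};

#define to_device(ptr) \
    ((struct device *)((char *)(ptr) - offsetof(struct device, node)))

/* visit every device on the circular list headed by 'head' */
#define for_each_device(dev, head) \
    for (struct list_node *it_ = (head)->next; \
         it_ != (head) && (((dev) = to_device(it_)) != NULL); \
         it_ = it_->next)

A caller can then loop with for_each_device(dev, &some_list_head) and
never deal with instance IDs or NULL checks.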

Change-Id: If41dbee30a743d29ab62ce930a819160265b9351
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2404914
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Tejal Kudav
b269aae9f2 gpu: nvgpu: correct usage of pbdma_id
The pbdma_id field stored in struct nvgpu_device is a bitmask, not a
bit position as the name implies. This field is incorrectly used as a
bit position in nvgpu_engine_disable_activity(), causing PRI timeout
errors during the iGPU and dGPU shutdown paths.

PRI timeout errors -
nvgpu: 17000000.gv11b                  gk20a_ptimer_isr:54   [ERR]
PRI timeout: ADR 0x0000308c READ  DATA 0x00000000

Here the pbdma_id stored in struct nvgpu_device for runlist_0 on
gv11b is 0x3 (the bitmask corresponding to PBDMA_0 and PBDMA_1).
nvgpu_engine_disable_activity() interprets this as PBDMA_3 and adds an
incorrect offset when accessing the PBDMA_STATUS register, causing the
PRI error.

Modify nvgpu_engine_disable_activity() to treat pbdma_id as a bitmask
and loop over its set bits.
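
A self-contained sketch of the corrected loop: treat the stored value as
a bitmask and visit each set bit as a PBDMA index (the helper name is
illustrative).

#include <stdint.h>
#include <stdio.h>

static void for_each_pbdma(uint32_t pbdma_mask)
{
    uint32_t id;

    for (id = 0U; id < 32U; id++) {
        if ((pbdma_mask & (1U << id)) == 0U) {
            continue;
        }
        /* 'id' is a real PBDMA index; read its PBDMA_STATUS here */
        printf("checking PBDMA_%u status\n", id);
    }
}

int main(void)
{
    /* runlist_0 on gv11b stores 0x3: PBDMA_0 and PBDMA_1 */
    for_each_pbdma(0x3U);
    return 0;
}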

JIRA NVGPU-5991

Change-Id: Iaffb974cddaa375a329e70f3b5903b9ef2a222c4
Signed-off-by: Tejal Kudav <tkudav@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2397954
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Seshendra Gadagottu
41057dac58 gpu: nvgpu: netlist: fix memory leak with region info
During dynamic netlist detection, before switching to a new
netlist, the previous netlist region info needs to be released
cleanly. Similarly, during netlist_deinit, all region info
data needs to be released.

JIRA NVGPU-6044

Change-Id: Iacc2ab160dc9ec57c3ca8646bda9e2a5d9b38e98
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2405173
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Deepak Nibade
717921a274 gpu: nvgpu: return intr mask of all GR engine instances
nvgpu_gr_engine_interrupt_mask() earlier returned the mask of all GR
engine instance interrupts. During the device refactor series, this got
changed to return the interrupt of only the first instance.

Change this again to return the interrupt mask of all the GR engine
instances, since the common.mc unit does not yet support APIs to enable
the interrupt of an individual GR instance.

Update the nvgpu_gr_get_syspipe_id() API to take gr_instance_id as a
parameter instead of a struct nvgpu_gr pointer. The definition of
struct nvgpu_gr is not available outside the common.gr unit.

Jira NVGPU-5648

Change-Id: I5320d1515eea6054150dc14706a16475bd650da7
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2405409
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Lakshmanan M <lm@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2020-12-15 14:13:28 -06:00
Deepak Nibade
35fabed1e8 gpu: nvgpu: execute gr_init_prepare_hw() for each instance
Rename gr_init_reset_enable_hw() to gr_init_prepare_hw() since this
function does not actually do the reset, but just prepares the HW
after reset for other SW/HW initialization.

Add a new function gr_init_prepare_hw_impl() that executes the
per-instance sequence to prepare GR hardware. Execute this inside
nvgpu_gr_exec_with_ret_for_each_instance().

Note that enabling GR engine interrupts in MC is still expected to
be done in one shot, hence keep that code outside of
gr_init_prepare_hw_impl().

Remove redundant calls to gops.gr.init.fifo_access() and
enable_gr_interrupts() from gr_init_setup_hw().
gr_init_prepare_hw() does this already and executes before
gr_init_setup_hw().

Jira NVGPU-5648

Change-Id: If0b7207f80c2fb00d894afebce04b06b7b61d432
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2405408
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Lakshmanan M <lm@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2020-12-15 14:13:28 -06:00
Deepak Nibade
ebb65b6eae gpu: nvgpu: fix nvgpu_grmgr_get_gr_syspipe_id
The API nvgpu_grmgr_get_gr_syspipe_id() right now traverses all the GPU
instances to find the requested gr_instance_id. But logically,
gr_instance_id is always going to be the same as gpu_instance_id, since
nvgpu only supports one GR engine instance per GPU instance.

Fix this function by extracting the GPU instance based on gr_instance_id
and then fetching the syspipe_id stored for that GPU instance.

Jira NVGPU-5648

Change-Id: Ie7b86d765006353d0571e786a8089e7f75f779c3
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2405406
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Lakshmanan M <lm@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2020-12-15 14:13:28 -06:00
Deepak Nibade
6745b0685e gpu: nvgpu: support resetting each GR instance
Add a new header file <nvgpu/gr/gr_instances.h> that provides the
macros below to execute various functions for GR instances:

1) nvgpu_gr_exec_for_each_instance
   Execute a function for each GR instance by configuring the GR remap
   window for that instance. The function being executed returns void.

2) nvgpu_gr_exec_with_ret_for_each_instance
   Execute a function for each GR instance by configuring the GR remap
   window for that instance. The function being executed returns an
   error code.

3) nvgpu_gr_exec_for_all_instances
   Execute a function for all GR instances at once. For this the GR
   remap window needs to be disabled temporarily.

If CONFIG_NVGPU_MIG is disabled, or if the runtime flag
NVGPU_SUPPORT_MIG is disabled, all of the above macros turn into simple
function calls that configure a single GR instance (the compile-time
case is sketched below).
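
A hypothetical illustration of the pattern; the helper names below are
stand-ins, not the real gr_instances.h contents. With MIG support
compiled in, the expression runs once per GR instance under the remap
window; without it, the macro collapses to a plain call.

struct gk20a;
unsigned int gr_num_instances(struct gk20a *g);
void gr_remap_window_enable(struct gk20a *g, unsigned int instance);
void gr_remap_window_disable(struct gk20a *g);

#ifdef CONFIG_NVGPU_MIG
#define gr_exec_with_ret_for_each_instance(g, expr) \
({ \
    int err_ = 0; \
    unsigned int i_; \
    for (i_ = 0U; i_ < gr_num_instances(g); i_++) { \
        gr_remap_window_enable((g), i_); \
        err_ = (expr); \
        gr_remap_window_disable(g); \
        if (err_ != 0) { \
            break; \
        } \
    } \
    err_; \
})
#else
#define gr_exec_with_ret_for_each_instance(g, expr) (expr)
#endif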

Separate out GR engine reset code into new API gr_reset_engine() and
execute it with nvgpu_gr_exec_with_ret_for_each_instance().

PROD values need to be loaded in legacy mode, hence call
nvgpu_cg_init_gr_load_gating_prod() inside
nvgpu_gr_exec_for_all_instances().

Rename gr_init_prepare_hw() to the more appropriate
gr_reset_hw_and_load_prod().

Move the gops.gr.init.fifo_access() call to gr_init_reset_enable_hw().

Add new API nvgpu_grmgr_get_gr_syspipe_id() to query GR instance syspipe
id from common.grmgr unit. Add nvgpu_gr_get_syspipe_id() that returns
same value stored in nvgpu_gr struct.

Add cur_gr_instance field to struct nvgpu_gr to track current GR
instance being programmed under remap window.

Jira NVGPU-5648

Change-Id: I86920303427a6e6547ebf195daa37438365bb38e
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2403550
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Deepak Nibade
34c24873a7 gpu: nvgpu: trigger gpc reset from common.gr
GPC reset is right now triggered from the common.mc unit for NVGPU_NEXT.
Move the triggers to common code in the common.gr unit. This way it is
much cleaner to handle multiple GR instances (added in a subsequent
patch).

Hardcode the GR engine instance to 0 for now since by default there is
only one GR engine instance.

Jira NVGPU-5648

Change-Id: I3fd4d0a50db5a8c4b3decf1df881af323cea50c1
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2403549
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Debarshi Dutta
38ce6fa717 gpu: nvgpu: change unnamed structs to named structs
The following changes are made in this patch:
1) Change unnamed structs within gpu_ops to named structs
with the prefix gops_*.

2) Each named struct gops_* is moved into a separate gops specific file
under include/nvgpu/gops/.

3) struct gpu_ops is moved into a separate file include/nvgpu/gpu_ops.h
and all other dependent struct gops_* are included in this header.

4) Direct references to include/nvgpu/gops are removed from files as it
is enough to include gk20a.h.

Change-Id: Ieb22cb853be567e3bef14f5f8a04674eebd902ea
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2398776
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Rajesh Devaraj <rdevaraj@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2020-12-15 14:13:28 -06:00
rmylavarapu
d0c01fc14c gpu: nvgpu: Support ELPG feature on nvgpu-next
Changes:
 -Implemented pg init_send ops for legacy chips.
 -Implemented RPC response handler.
 -Added pg rpc function call macros for nvgpu-next.

NVGPU-5192
NVGPU-5195
NVGPU-5196

Signed-off-by: rmylavarapu <rmylavarapu@nvidia.com>
Change-Id: I4e99d3929d7db796434aaeaa6f5773e9aac9fd32
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2391029
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Deepak Nibade
a2809088eb gpu: nvgpu: remove unnecessary hal gops.gr.gr_enable_hw()
gops.gr.gr_enable_hw() is a common function and is not referenced on
vGPU. Remove the HAL pointer and directly use nvgpu_gr_enable_hw()
instead.

Jira NVGPU-5648

Change-Id: Id031024ed01f9d890cffb5902cc433800810b219
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2403548
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Lakshmanan M <lm@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
1117ea1286 gpu: nvgpu: ce: check address ranges before exec
The source and destination addresses are masked to the low 40 bits only.
Make sure that the input params don't cross that; it would mean a bug
somewhere on the caller side. Silently truncating values could cause
unexpected behaviour, but no device even has that much memory.

Also rename the src_buf and dst_buf to src_paddr and dst_paddr to
emphasize that the addresses are gpu physical.
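
A sketch of the check described above, assuming a 40-bit address width
constant (the names are illustrative): anything with bits set above
bit 39 is rejected rather than silently truncated.

#include <stdint.h>
#include <errno.h>

#define CE_ADDR_WIDTH_BITS 40U
#define CE_ADDR_MASK ((1ULL << CE_ADDR_WIDTH_BITS) - 1ULL)

static int ce_check_paddr(uint64_t paddr)
{
    /* a set bit above bit 39 means the caller passed a bogus address */
    if ((paddr & ~CE_ADDR_MASK) != 0ULL) {
        return -EINVAL;
    }
    return 0;
}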

Jira NVGPU-5172

Change-Id: I30653bf93791517991d04e4ba43220b5b541f581
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2402031
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
aafc9a4511 gpu: nvgpu: ce: move exec input checks up
Check the sanity of some input arguments already as the first thing so
that a better error code can be returned.

Jira NVGPU-5172

Change-Id: I1c847c10166471e520d0e9aaeeef606bd7d8634e
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2402030
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
82b4a8e825 gpu: nvgpu: ce: allocate exact cmdbuf size
Avoid the magic value 256 by basing the constant max cmdbuf bytes per
submit on the actual data used in the submits. Each submit contains a
setclass header and at most two transfer or memset operations.

Jira NVGPU-5172

Change-Id: I66d715fe5e7fcfc676c0d78a3cf35c2c6197a342
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2402029
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
a54e4f1d74 gpu: nvgpu: ce: use clear upper bound for op size
The copyengine code to do big transfers or memsets supports a 64-bit
size. Each copy is done as a rectangle with either side being at most
2GB, so a size that does not align nicely is split into multiple ops. It
turns out that there are at most two of these ops, so structure the code
to not loop but do two ops explicitly.

The first copyengine operation works with the first chunk that is less
than two gigabytes long. That leaves the remaining size to be a multiple
of two gigabytes, so it's sufficient to do just another operation as a
2D rectangle whose width is two gigabytes; the remaining size determines
the height, i.e. the number of two-gig lines.

The loop did just this already, but now with at most two operations per
submit the required pushbuf length is seen more easily from the code.
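
A worked sketch of the split, sizes only and no actual copyengine
methods: the sub-2GB remainder is copied first, and whatever is left is
an exact multiple of 2 GB, i.e. one 2D rectangle whose line width is
2 GB.

#include <stdint.h>
#include <stdio.h>

#define TWO_GB (2ULL * 1024ULL * 1024ULL * 1024ULL)

static void split_copy_size(uint64_t size)
{
    uint64_t first = size % TWO_GB;   /* chunk below 2 GB, may be 0 */
    uint64_t rest = size - first;     /* exact multiple of 2 GB */
    uint64_t lines = rest / TWO_GB;   /* height of the 2D rectangle */

    if (first != 0ULL) {
        printf("op 1: linear copy of %llu bytes\n",
               (unsigned long long)first);
    }
    if (lines != 0ULL) {
        printf("op 2: 2D copy, %llu line(s) of 2 GB\n",
               (unsigned long long)lines);
    }
}

int main(void)
{
    split_copy_size(5ULL * 1024ULL * 1024ULL * 1024ULL);  /* 5 GB */
    return 0;
}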

Jira NVGPU-5172

Change-Id: I6bca3b1204db3b79e131898c07018a1337d85774
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2402028
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
4351978013 gpu: nvgpu: ce: make payload param u32
The payload word used for copyengine memsets is written to an u32
buffer, so use the correct type from the beginning.

Jira NVGPU-5172

Change-Id: Id813e042b609cb9d0705ba32d3cc03351bded413
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2402027
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
611ad23bde gpu: nvgpu: move channel worker and wdt
Continue making the incoherent channel functionality more structured by
moving the worker thread business to one file and the channel watchdog
logic to another. This is channel-internal restructuring; the interface
to other units does not change.

The watchdog logic is called from the worker thread and as such these
are rather tightly coupled but it's possible to have the thread and not
the watchdog.

Jira NVGPU-5582

Change-Id: I70f334dd15c9aca0eed75393b99e2f080d133015
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2398921
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Deepak Nibade
2427d45102 gpu: nvgpu: initialize gr ecc counters for each instance
Add new API nvgpu_ecc_counter_init_per_gr() to initialize ECC counters
per GR instance.
Switch NVGPU_ECC_COUNTER_INIT_GR macro to use
nvgpu_ecc_counter_init_per_gr() instead of nvgpu_ecc_counter_init().

Fix error handling path in nvgpu_gr_alloc().

Jira NVGPU-5648

Change-Id: I18f1bf8b245956bdb5a3e4bb6b03114282366ce6
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2402025
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2020-12-15 14:13:28 -06:00
Konsta Hölttä
5e570610b3 gpu: nvgpu: track syncpt max internally
The max values that the Linux nvhost driver tracks are adding some
complexity to our wrapper APIs. Max values are used only for internal
submit syncpoint tracking, so implement that tracking in the sync code
by just storing the last value that the syncpoint will reach after all
jobs are complete.

The value is a simple u32. It's accessed from functions in the submit
path that already is serialized, so there's no worrying about atomic
modifications.

Previously nvhost_syncpt_set_min_eq_max_ext() was used to reset the
syncpoint when necessary. Now with the internal max value we'll use
nvhost_syncpt_set_minval(), so add a wrapper for it.

The maxval reported with the user syncpoint allocation is just the
current value at allocation time since no jobs have affected it yet;
there is no means for the kernel to track the max value of user
syncpoints.
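
A minimal sketch of the internal tracking described above, using a
simplified stand-in struct rather than the real channel sync code: the
submit path, which is already serialized, just bumps the stored max by
the job's increment count.

#include <stdint.h>

struct sp_tracker {
    uint32_t max;   /* value the syncpoint reaches once all jobs finish */
};

/* called from the serialized submit path for each new job */
static uint32_t sp_tracker_incr_max(struct sp_tracker *t, uint32_t incrs)
{
    t->max += incrs;
    return t->max;   /* fence threshold for this job */
}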

Jira NVGPU-5506

Change-Id: I34672eaa7fe3af36b2fbac92d11babe2bc6a2d2b
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2400635
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
b062081c52 gpu: nvgpu: add function for prealloc job release
The last steps to finish job cleanup for both deterministic and
nondeterministic submits are the same: put away preallocated job
resources that the job had consumed. Avoid duplicated code by moving
this code to a function that's shared with both paths.

Jira NVGPU-5998

Change-Id: Ic278b0bc8f0f05895f5c24340a60c1ce3eade0b3
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2401468
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
d4fb476e70 gpu: nvgpu: remove joblist cleanup lock
The joblist cleanup lock exists to synchronize the submit job cleanup
and the abort cleanup that may run in separate threads concurrently.
This concurrency is no problem anymore, so delete the lock.

The lock was added in commit f1072a28be ("gpu: nvgpu: add worker for
watchdog and job cleanup") when the abort cleanup still went through
each job in the pending list and released their semaphores; ordinary job
cleanup from the worker thread also accesses the jobs. Commit d20a501dcb
("gpu: nvgpu: simplify job semaphore release in abort") deleted the
entire loop because the semaphore, if any, is now reset in one go (via
the "set_min_eq_max" ch sync op), but the lock stayed.

With aggressive sync destroy enabled the sync object under the cleanup
lock can still disappear if the job cleanup runs, but that's already
guarded with the sync lock.

Jira NVGPU-5998

Change-Id: I6554eb2065b003c6fdf83f66f97067b59aa272f5
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2401467
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Deepak Nibade
8cccb49bd2 gpu: nvgpu: collapse nvgpu_gr_prepare_sw into nvgpu_gr_alloc
The common.gr unit exports a separate API nvgpu_gr_prepare_sw to
initialize some SW pieces required for nvgpu_gr_enable_hw().
A separate API is really unnecessary since the same initialization
can be performed in nvgpu_gr_alloc().

Remove nvgpu_gr_prepare_sw() and the HAL gops.gr.gr_prepare_sw().
Initialize the falcon and interrupt structures in a loop from
nvgpu_gr_alloc().

Move nvgpu_netlist_init_ctx_vars() from nvgpu_gr_prepare_sw() to the
common init path since netlist parsing need not be done from the
common.gr unit. It just needs to happen before nvgpu_gr_enable_hw().

Also, trigger nvgpu_gr_free() from gr_remove_support() instead
of OS specific paths. Also remove the nvgpu_gr_free() calls from
probe error paths since nvgpu_gr_alloc is no longer called in the
probe path.

Move the interrupt and falcon data structure free calls to
nvgpu_gr_free().

Also remove the corresponding unit testing code that tests
nvgpu_gr_prepare_sw() specifically.
Update some unit tests to initialize ecc counters and netlist.
Disable some unit tests that fail for unknown reasons.

Jira NVGPU-5648

Change-Id: I82ec8160f76530bc40e0c11a9f26ba1c8f9cf643
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2400166
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Deepak Nibade
cfa360f5b8 gpu: nvgpu: allocate struct nvgpu_gr based on enumerated gr count
Add a new API nvgpu_grmgr_get_num_gr_instances() that returns the number
of GR instances enumerated by the GR manager. This just returns the
number of sys pipes enabled, since it is the same as the number of GR
instances.

For consistency, until common.gr supports multiple GR instances
completely, add a temporary macro NVGPU_GR_NUM_INSTANCES and set it
to 1. If this macro is changed to 0 (for local MIG testing), fall
back to nvgpu_grmgr_get_num_gr_instances() to get the enumerated number
of GR instances.

Use a for loop to initialize the other variables of struct nvgpu_gr.

Remove the unnecessary NULL check in nvgpu_gr_alloc() since the struct
gk20a pointer can never be NULL in this path. Also remove the
corresponding unit test code.

Jira NVGPU-5648

Change-Id: Id151d634a23235381229044f2a9af89e390886f2
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2400151
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00