Merge tag 'drm-fixes-2026-01-09' of https://gitlab.freedesktop.org/drm/kernel

Pull drm fixes from Dave Airlie:
 "I missed the drm-rust fixes tree for last week, so this catches up on
  that, along with amdgpu, and then some misc fixes across a few
  drivers. I hadn't got an xe pull by the time I sent this; I suspect
  one will arrive 10 mins after, but I don't think there is anything
  that can't wait for next week.

  Things seem to have picked up a little with people coming back from
  holidays:

  MAINTAINERS:
   - Fix Nova GPU driver git links
   - Fix typo in TYR driver entry preventing correct behavior of
     scripts/get_maintainer.pl
   - Exclude TYR driver from DRM MISC

  nova-core:
   - Correctly select RUST_FW_LOADER_ABSTRACTIONS to prevent build
     errors
   - Regenerate nova-core bindgen bindings with '--explicit-padding' to
     avoid uninitialized bytes
   - Fix length of received GSP messages due to miscalculated message
     payload size
   - Regenerate bindings to derive MaybeZeroable
   - Use a bindings alias to derive the firmware version

  exynos:
   - hdmi: replace system_wq with system_percpu_wq

  pl111:
   - Fix error handling in probe

  mediatek/atomic/tidss:
   - Fix tidss in another way and revert reordering of pre-enable and
     post-disable operations, as it breaks other bridge drivers

  nouveau:
   - Fix regression from fwsec s/r fix

  pci/vga:
   - Fix multiple GPUs being reported as 'boot_display'

  fb-helper:
   - Fix vblank timeout during suspend/reset

  amdgpu:
   - Clang fixes
   - Navi1x PCIe DPM fixes
   - Ring reset fixes
   - ISP suspend fix
   - Analog DC fixes
   - VPE fixes
   - Mode1 reset fix

  radeon:
   - Variable sized array fix"

* tag 'drm-fixes-2026-01-09' of https://gitlab.freedesktop.org/drm/kernel: (32 commits)
  Reapply "Revert "drm/amd: Skip power ungate during suspend for VPE""
  drm/amd/display: Check NULL before calling dac_load_detection
  drm/amd/pm: Disable MMIO access during SMU Mode 1 reset
  drm/exynos: hdmi: replace use of system_wq with system_percpu_wq
  drm/fb-helper: Fix vblank timeout during suspend/reset
  PCI/VGA: Don't assume the only VGA device on a system is `boot_vga`
  drm/amdgpu: Fix query for VPE block_type and ip_count
  drm/amd/display: Add missing encoder setup to DACnEncoderControl
  drm/amd/display: Correct color depth for SelectCRTC_Source
  drm/amd/amdgpu: Fix SMU warning during isp suspend-resume
  drm/amdgpu: always backup and reemit fences
  drm/amdgpu: don't reemit ring contents more than once
  drm/amd/pm: force send pcie parameter on navi1x
  drm/amd/pm: fix wrong pcie parameter on navi1x
  drm/radeon: Remove __counted_by from ClockInfoArray.clockInfo[]
  drm/amd/display: Reduce number of arguments of dcn30's CalculateWatermarksAndDRAMSpeedChangeSupport()
  drm/amd/display: Reduce number of arguments of dcn30's CalculatePrefetchSchedule()
  drm/amd/display: Apply e4479aecf658 to dml
  nouveau: don't attempt fwsec on sb on newer platforms
  drm/tidss: Fix enable/disable order
  ...
commit cbd4480cfa by Linus Torvalds, 2026-01-09 06:04:05 -10:00
41 changed files with 724 additions and 813 deletions


@ -2159,7 +2159,7 @@ M: Alice Ryhl <aliceryhl@google.com>
L: dri-devel@lists.freedesktop.org
S: Supported
W: https://rust-for-linux.com/tyr-gpu-driver
W https://drm.pages.freedesktop.org/maintainer-tools/drm-rust.html
W: https://drm.pages.freedesktop.org/maintainer-tools/drm-rust.html
B: https://gitlab.freedesktop.org/panfrost/linux/-/issues
T: git https://gitlab.freedesktop.org/drm/rust/kernel.git
F: Documentation/devicetree/bindings/gpu/arm,mali-valhall-csf.yaml
@ -8068,7 +8068,7 @@ W: https://rust-for-linux.com/nova-gpu-driver
Q: https://patchwork.freedesktop.org/project/nouveau/
B: https://gitlab.freedesktop.org/drm/nova/-/issues
C: irc://irc.oftc.net/nouveau
T: git https://gitlab.freedesktop.org/drm/nova.git nova-next
T: git https://gitlab.freedesktop.org/drm/rust/kernel.git drm-rust-next
F: Documentation/gpu/nova/
F: drivers/gpu/nova-core/
@ -8080,7 +8080,7 @@ W: https://rust-for-linux.com/nova-gpu-driver
Q: https://patchwork.freedesktop.org/project/nouveau/
B: https://gitlab.freedesktop.org/drm/nova/-/issues
C: irc://irc.oftc.net/nouveau
T: git https://gitlab.freedesktop.org/drm/nova.git nova-next
T: git https://gitlab.freedesktop.org/drm/rust/kernel.git drm-rust-next
F: Documentation/gpu/nova/
F: drivers/gpu/drm/nova/
F: include/uapi/drm/nova_drm.h
@ -8358,6 +8358,7 @@ X: drivers/gpu/drm/msm/
X: drivers/gpu/drm/nova/
X: drivers/gpu/drm/radeon/
X: drivers/gpu/drm/tegra/
X: drivers/gpu/drm/tyr/
X: drivers/gpu/drm/xe/
DRM DRIVERS AND COMMON INFRASTRUCTURE [RUST]


@ -3445,11 +3445,10 @@ int amdgpu_device_set_pg_state(struct amdgpu_device *adev,
(adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GFX ||
adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_SDMA))
continue;
/* skip CG for VCE/UVD/VPE, it's handled specially */
/* skip CG for VCE/UVD, it's handled specially */
if (adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_UVD &&
adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_VCE &&
adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_VCN &&
adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_VPE &&
adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_JPEG &&
adev->ip_blocks[i].version->funcs->set_powergating_state) {
/* enable powergating to save power */
@ -5867,6 +5866,9 @@ int amdgpu_device_mode1_reset(struct amdgpu_device *adev)
if (ret)
goto mode1_reset_failed;
/* enable mmio access after mode 1 reset completed */
adev->no_hw_access = false;
amdgpu_device_load_pci_state(adev->pdev);
ret = amdgpu_psp_wait_for_bootloader(adev);
if (ret)
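
Read together with the "drm/amd/pm: Disable MMIO access during SMU Mode 1 reset" commit in the shortlog, the hunk above is the re-enable half of an MMIO gate: register access is blocked while the SMU performs the reset, then reopened (together with a PCI config-state restore) before the first post-reset register poll. Below is a minimal sketch of that bracket pattern; the struct and helper names are hypothetical stand-ins, not the amdgpu API.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the device and its reset helpers. */
struct gpu {
	bool no_hw_access;	/* when true, MMIO helpers become no-ops */
};

static int smu_mode1_reset(struct gpu *g)     { (void)g; return 0; }
static int restore_pci_state(struct gpu *g)   { (void)g; return 0; }
static int wait_for_bootloader(struct gpu *g) { (void)g; return 0; }

static int mode1_reset(struct gpu *g)
{
	int ret;

	g->no_hw_access = true;		/* gate MMIO for the whole reset */
	ret = smu_mode1_reset(g);
	if (ret)
		return ret;

	/* reset completed: reopen MMIO before the first register access */
	g->no_hw_access = false;
	restore_pci_state(g);

	return wait_for_bootloader(g);
}

int main(void)
{
	struct gpu g = { .no_hw_access = false };

	printf("mode1 reset: %d\n", mode1_reset(&g));
	return 0;
}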


@ -89,6 +89,16 @@ static u32 amdgpu_fence_read(struct amdgpu_ring *ring)
return seq;
}
static void amdgpu_fence_save_fence_wptr_start(struct amdgpu_fence *af)
{
af->fence_wptr_start = af->ring->wptr;
}
static void amdgpu_fence_save_fence_wptr_end(struct amdgpu_fence *af)
{
af->fence_wptr_end = af->ring->wptr;
}
/**
* amdgpu_fence_emit - emit a fence on the requested ring
*
@ -116,8 +126,10 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct amdgpu_fence *af,
&ring->fence_drv.lock,
adev->fence_context + ring->idx, seq);
amdgpu_fence_save_fence_wptr_start(af);
amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr,
seq, flags | AMDGPU_FENCE_FLAG_INT);
amdgpu_fence_save_fence_wptr_end(af);
amdgpu_fence_save_wptr(af);
pm_runtime_get_noresume(adev_to_drm(adev)->dev);
ptr = &ring->fence_drv.fences[seq & ring->fence_drv.num_fences_mask];
@ -709,6 +721,7 @@ void amdgpu_fence_driver_guilty_force_completion(struct amdgpu_fence *af)
struct amdgpu_ring *ring = af->ring;
unsigned long flags;
u32 seq, last_seq;
bool reemitted = false;
last_seq = amdgpu_fence_read(ring) & ring->fence_drv.num_fences_mask;
seq = ring->fence_drv.sync_seq & ring->fence_drv.num_fences_mask;
@ -726,7 +739,9 @@ void amdgpu_fence_driver_guilty_force_completion(struct amdgpu_fence *af)
if (unprocessed && !dma_fence_is_signaled_locked(unprocessed)) {
fence = container_of(unprocessed, struct amdgpu_fence, base);
if (fence == af)
if (fence->reemitted > 1)
reemitted = true;
else if (fence == af)
dma_fence_set_error(&fence->base, -ETIME);
else if (fence->context == af->context)
dma_fence_set_error(&fence->base, -ECANCELED);
@ -734,9 +749,12 @@ void amdgpu_fence_driver_guilty_force_completion(struct amdgpu_fence *af)
rcu_read_unlock();
} while (last_seq != seq);
spin_unlock_irqrestore(&ring->fence_drv.lock, flags);
/* signal the guilty fence */
amdgpu_fence_write(ring, (u32)af->base.seqno);
amdgpu_fence_process(ring);
if (reemitted) {
/* if we've already reemitted once then just cancel everything */
amdgpu_fence_driver_force_completion(af->ring);
af->ring->ring_backup_entries_to_copy = 0;
}
}
void amdgpu_fence_save_wptr(struct amdgpu_fence *af)
@ -784,10 +802,18 @@ void amdgpu_ring_backup_unprocessed_commands(struct amdgpu_ring *ring,
/* save everything if the ring is not guilty, otherwise
* just save the content from other contexts.
*/
if (!guilty_fence || (fence->context != guilty_fence->context))
if (!fence->reemitted &&
(!guilty_fence || (fence->context != guilty_fence->context))) {
amdgpu_ring_backup_unprocessed_command(ring, wptr,
fence->wptr);
} else if (!fence->reemitted) {
/* always save the fence */
amdgpu_ring_backup_unprocessed_command(ring,
fence->fence_wptr_start,
fence->fence_wptr_end);
}
wptr = fence->wptr;
fence->reemitted++;
}
rcu_read_unlock();
} while (last_seq != seq);
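
Two invariants drive the backup logic above: a submission is re-emitted at most once (after a second reset the driver force-completes the ring and copies nothing), and even the guilty context gets its fence packet — the span between fence_wptr_start and fence_wptr_end — replayed so the sequence number still advances. A condensed sketch of that decision, using simplified stand-in types rather than the real amdgpu structures:

/* Simplified stand-ins; not the real amdgpu_fence/amdgpu_ring types. */
struct sub_fence {
	unsigned int reemitted;		/* times this submission was replayed */
	unsigned long context;
	unsigned long wptr;		/* end of the whole submission */
	unsigned long fence_wptr_start;	/* start of the fence packet */
	unsigned long fence_wptr_end;	/* end of the fence packet */
};

/* Copy ring contents in [start, end) into the backup buffer (stub). */
static void backup_span(unsigned long start, unsigned long end)
{
	(void)start; (void)end;
}

static void backup_one(struct sub_fence *f, const struct sub_fence *guilty,
		       unsigned long wptr /* end of previous submission */)
{
	if (!f->reemitted) {
		if (!guilty || f->context != guilty->context)
			backup_span(wptr, f->wptr);	 /* innocent: replay all */
		else
			backup_span(f->fence_wptr_start, /* guilty: fence only */
				    f->fence_wptr_end);
	}
	f->reemitted++;		/* >1 after a second reset: cancel instead */
}

int main(void)
{
	struct sub_fence guilty = { .context = 1, .wptr = 64,
				    .fence_wptr_start = 56,
				    .fence_wptr_end = 64 };

	backup_one(&guilty, &guilty, 0);	/* guilty: fence packet only */
	return guilty.reemitted == 1 ? 0 : 1;
}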


@ -318,12 +318,36 @@ void isp_kernel_buffer_free(void **buf_obj, u64 *gpu_addr, void **cpu_addr)
}
EXPORT_SYMBOL(isp_kernel_buffer_free);
static int isp_resume(struct amdgpu_ip_block *ip_block)
{
struct amdgpu_device *adev = ip_block->adev;
struct amdgpu_isp *isp = &adev->isp;
if (isp->funcs->hw_resume)
return isp->funcs->hw_resume(isp);
return -ENODEV;
}
static int isp_suspend(struct amdgpu_ip_block *ip_block)
{
struct amdgpu_device *adev = ip_block->adev;
struct amdgpu_isp *isp = &adev->isp;
if (isp->funcs->hw_suspend)
return isp->funcs->hw_suspend(isp);
return -ENODEV;
}
static const struct amd_ip_funcs isp_ip_funcs = {
.name = "isp_ip",
.early_init = isp_early_init,
.hw_init = isp_hw_init,
.hw_fini = isp_hw_fini,
.is_idle = isp_is_idle,
.suspend = isp_suspend,
.resume = isp_resume,
.set_clockgating_state = isp_set_clockgating_state,
.set_powergating_state = isp_set_powergating_state,
};

View File

@ -38,6 +38,8 @@ struct amdgpu_isp;
struct isp_funcs {
int (*hw_init)(struct amdgpu_isp *isp);
int (*hw_fini)(struct amdgpu_isp *isp);
int (*hw_suspend)(struct amdgpu_isp *isp);
int (*hw_resume)(struct amdgpu_isp *isp);
};
struct amdgpu_isp {


@ -201,6 +201,9 @@ static enum amd_ip_block_type
type = (amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_JPEG)) ?
AMD_IP_BLOCK_TYPE_JPEG : AMD_IP_BLOCK_TYPE_VCN;
break;
case AMDGPU_HW_IP_VPE:
type = AMD_IP_BLOCK_TYPE_VPE;
break;
default:
type = AMD_IP_BLOCK_TYPE_NUM;
break;
@ -721,6 +724,9 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
case AMD_IP_BLOCK_TYPE_UVD:
count = adev->uvd.num_uvd_inst;
break;
case AMD_IP_BLOCK_TYPE_VPE:
count = adev->vpe.num_instances;
break;
/* For all other IP block types not listed in the switch statement
* the ip status is valid here and the instance count is one.
*/


@ -144,10 +144,15 @@ struct amdgpu_fence {
struct amdgpu_ring *ring;
ktime_t start_timestamp;
/* wptr for the fence for resets */
/* wptr for the total submission for resets */
u64 wptr;
/* fence context for resets */
u64 context;
/* has this fence been reemitted */
unsigned int reemitted;
/* wptr for the fence for the submission */
u64 fence_wptr_start;
u64 fence_wptr_end;
};
extern const struct drm_sched_backend_ops amdgpu_sched_ops;


@ -26,6 +26,7 @@
*/
#include <linux/gpio/machine.h>
#include <linux/pm_runtime.h>
#include "amdgpu.h"
#include "isp_v4_1_1.h"
@ -145,6 +146,9 @@ static int isp_genpd_add_device(struct device *dev, void *data)
return -ENODEV;
}
/* The devices will be managed by the pm ops from the parent */
dev_pm_syscore_device(dev, true);
exit:
/* Continue to add */
return 0;
@ -177,12 +181,47 @@ static int isp_genpd_remove_device(struct device *dev, void *data)
drm_err(&adev->ddev, "Failed to remove dev from genpd %d\n", ret);
return -ENODEV;
}
dev_pm_syscore_device(dev, false);
exit:
/* Continue to remove */
return 0;
}
static int isp_suspend_device(struct device *dev, void *data)
{
return pm_runtime_force_suspend(dev);
}
static int isp_resume_device(struct device *dev, void *data)
{
return pm_runtime_force_resume(dev);
}
static int isp_v4_1_1_hw_suspend(struct amdgpu_isp *isp)
{
int r;
r = device_for_each_child(isp->parent, NULL,
isp_suspend_device);
if (r)
dev_err(isp->parent, "failed to suspend hw devices (%d)\n", r);
return r;
}
static int isp_v4_1_1_hw_resume(struct amdgpu_isp *isp)
{
int r;
r = device_for_each_child(isp->parent, NULL,
isp_resume_device);
if (r)
dev_err(isp->parent, "failed to resume hw device (%d)\n", r);
return r;
}
static int isp_v4_1_1_hw_init(struct amdgpu_isp *isp)
{
const struct software_node *amd_camera_node, *isp4_node;
@ -369,6 +408,8 @@ static int isp_v4_1_1_hw_fini(struct amdgpu_isp *isp)
static const struct isp_funcs isp_v4_1_1_funcs = {
.hw_init = isp_v4_1_1_hw_init,
.hw_fini = isp_v4_1_1_hw_fini,
.hw_suspend = isp_v4_1_1_hw_suspend,
.hw_resume = isp_v4_1_1_hw_resume,
};
void isp_v4_1_1_set_isp_funcs(struct amdgpu_isp *isp)


@ -763,14 +763,14 @@ static enum bp_result bios_parser_encoder_control(
return BP_RESULT_FAILURE;
return bp->cmd_tbl.dac1_encoder_control(
bp, cntl->action == ENCODER_CONTROL_ENABLE,
bp, cntl->action,
cntl->pixel_clock, ATOM_DAC1_PS2);
} else if (cntl->engine_id == ENGINE_ID_DACB) {
if (!bp->cmd_tbl.dac2_encoder_control)
return BP_RESULT_FAILURE;
return bp->cmd_tbl.dac2_encoder_control(
bp, cntl->action == ENCODER_CONTROL_ENABLE,
bp, cntl->action,
cntl->pixel_clock, ATOM_DAC1_PS2);
}


@ -1797,7 +1797,30 @@ static enum bp_result select_crtc_source_v3(
&params.ucEncodeMode))
return BP_RESULT_BADINPUT;
params.ucDstBpc = bp_params->bit_depth;
switch (bp_params->color_depth) {
case COLOR_DEPTH_UNDEFINED:
params.ucDstBpc = PANEL_BPC_UNDEFINE;
break;
case COLOR_DEPTH_666:
params.ucDstBpc = PANEL_6BIT_PER_COLOR;
break;
default:
case COLOR_DEPTH_888:
params.ucDstBpc = PANEL_8BIT_PER_COLOR;
break;
case COLOR_DEPTH_101010:
params.ucDstBpc = PANEL_10BIT_PER_COLOR;
break;
case COLOR_DEPTH_121212:
params.ucDstBpc = PANEL_12BIT_PER_COLOR;
break;
case COLOR_DEPTH_141414:
dm_error("14-bit color not supported by SelectCRTC_Source v3\n");
break;
case COLOR_DEPTH_161616:
params.ucDstBpc = PANEL_16BIT_PER_COLOR;
break;
}
if (EXEC_BIOS_CMD_TABLE(SelectCRTC_Source, params))
result = BP_RESULT_OK;
@ -1815,12 +1838,12 @@ static enum bp_result select_crtc_source_v3(
static enum bp_result dac1_encoder_control_v1(
struct bios_parser *bp,
bool enable,
enum bp_encoder_control_action action,
uint32_t pixel_clock,
uint8_t dac_standard);
static enum bp_result dac2_encoder_control_v1(
struct bios_parser *bp,
bool enable,
enum bp_encoder_control_action action,
uint32_t pixel_clock,
uint8_t dac_standard);
@ -1846,12 +1869,15 @@ static void init_dac_encoder_control(struct bios_parser *bp)
static void dac_encoder_control_prepare_params(
DAC_ENCODER_CONTROL_PS_ALLOCATION *params,
bool enable,
enum bp_encoder_control_action action,
uint32_t pixel_clock,
uint8_t dac_standard)
{
params->ucDacStandard = dac_standard;
if (enable)
if (action == ENCODER_CONTROL_SETUP ||
action == ENCODER_CONTROL_INIT)
params->ucAction = ATOM_ENCODER_INIT;
else if (action == ENCODER_CONTROL_ENABLE)
params->ucAction = ATOM_ENABLE;
else
params->ucAction = ATOM_DISABLE;
@ -1864,7 +1890,7 @@ static void dac_encoder_control_prepare_params(
static enum bp_result dac1_encoder_control_v1(
struct bios_parser *bp,
bool enable,
enum bp_encoder_control_action action,
uint32_t pixel_clock,
uint8_t dac_standard)
{
@ -1873,7 +1899,7 @@ static enum bp_result dac1_encoder_control_v1(
dac_encoder_control_prepare_params(
&params,
enable,
action,
pixel_clock,
dac_standard);
@ -1885,7 +1911,7 @@ static enum bp_result dac1_encoder_control_v1(
static enum bp_result dac2_encoder_control_v1(
struct bios_parser *bp,
bool enable,
enum bp_encoder_control_action action,
uint32_t pixel_clock,
uint8_t dac_standard)
{
@ -1894,7 +1920,7 @@ static enum bp_result dac2_encoder_control_v1(
dac_encoder_control_prepare_params(
&params,
enable,
action,
pixel_clock,
dac_standard);


@ -57,12 +57,12 @@ struct cmd_tbl {
struct bp_crtc_source_select *bp_params);
enum bp_result (*dac1_encoder_control)(
struct bios_parser *bp,
bool enable,
enum bp_encoder_control_action action,
uint32_t pixel_clock,
uint8_t dac_standard);
enum bp_result (*dac2_encoder_control)(
struct bios_parser *bp,
bool enable,
enum bp_encoder_control_action action,
uint32_t pixel_clock,
uint8_t dac_standard);
enum bp_result (*dac1_output_control)(


@ -30,7 +30,11 @@ dml_rcflags := $(CC_FLAGS_NO_FPU)
ifneq ($(CONFIG_FRAME_WARN),0)
ifeq ($(filter y,$(CONFIG_KASAN)$(CONFIG_KCSAN)),y)
frame_warn_limit := 3072
ifeq ($(CONFIG_CC_IS_CLANG)$(CONFIG_COMPILE_TEST),yy)
frame_warn_limit := 4096
else
frame_warn_limit := 3072
endif
else
frame_warn_limit := 2048
endif


@ -77,32 +77,14 @@ static unsigned int dscceComputeDelay(
static unsigned int dscComputeDelay(
enum output_format_class pixelFormat,
enum output_encoder_class Output);
// Super monster function with some 45 argument
static bool CalculatePrefetchSchedule(
struct display_mode_lib *mode_lib,
double PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData,
double PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly,
unsigned int k,
Pipe *myPipe,
unsigned int DSCDelay,
double DPPCLKDelaySubtotalPlusCNVCFormater,
double DPPCLKDelaySCL,
double DPPCLKDelaySCLLBOnly,
double DPPCLKDelayCNVCCursor,
double DISPCLKDelaySubtotal,
unsigned int DPP_RECOUT_WIDTH,
enum output_format_class OutputFormat,
unsigned int MaxInterDCNTileRepeaters,
unsigned int VStartup,
unsigned int MaxVStartup,
unsigned int GPUVMPageTableLevels,
bool GPUVMEnable,
bool HostVMEnable,
unsigned int HostVMMaxNonCachedPageTableLevels,
double HostVMMinPageSize,
bool DynamicMetadataEnable,
bool DynamicMetadataVMEnabled,
int DynamicMetadataLinesBeforeActiveRequired,
unsigned int DynamicMetadataTransmittedBytes,
double UrgentLatency,
double UrgentExtraLatency,
double TCalc,
@ -116,7 +98,6 @@ static bool CalculatePrefetchSchedule(
unsigned int MaxNumSwathY,
double PrefetchSourceLinesC,
unsigned int SwathWidthC,
int BytePerPixelC,
double VInitPreFillC,
unsigned int MaxNumSwathC,
long swath_width_luma_ub,
@ -124,9 +105,6 @@ static bool CalculatePrefetchSchedule(
unsigned int SwathHeightY,
unsigned int SwathHeightC,
double TWait,
bool ProgressiveToInterlaceUnitInOPP,
double *DSTXAfterScaler,
double *DSTYAfterScaler,
double *DestinationLinesForPrefetch,
double *PrefetchBandwidth,
double *DestinationLinesToRequestVMInVBlank,
@ -135,14 +113,7 @@ static bool CalculatePrefetchSchedule(
double *VRatioPrefetchC,
double *RequiredPrefetchPixDataBWLuma,
double *RequiredPrefetchPixDataBWChroma,
bool *NotEnoughTimeForDynamicMetadata,
double *Tno_bw,
double *prefetch_vmrow_bw,
double *Tdmdl_vm,
double *Tdmdl,
unsigned int *VUpdateOffsetPix,
double *VUpdateWidthPix,
double *VReadyOffsetPix);
bool *NotEnoughTimeForDynamicMetadata);
static double RoundToDFSGranularityUp(double Clock, double VCOSpeed);
static double RoundToDFSGranularityDown(double Clock, double VCOSpeed);
static void CalculateDCCConfiguration(
@ -294,62 +265,23 @@ static void CalculateDynamicMetadataParameters(
static void CalculateWatermarksAndDRAMSpeedChangeSupport(
struct display_mode_lib *mode_lib,
unsigned int PrefetchMode,
unsigned int NumberOfActivePlanes,
unsigned int MaxLineBufferLines,
unsigned int LineBufferSize,
unsigned int DPPOutputBufferPixels,
unsigned int DETBufferSizeInKByte,
unsigned int WritebackInterfaceBufferSize,
double DCFCLK,
double ReturnBW,
bool GPUVMEnable,
unsigned int dpte_group_bytes[],
unsigned int MetaChunkSize,
double UrgentLatency,
double ExtraLatency,
double WritebackLatency,
double WritebackChunkSize,
double SOCCLK,
double DRAMClockChangeLatency,
double SRExitTime,
double SREnterPlusExitTime,
double DCFCLKDeepSleep,
unsigned int DPPPerPlane[],
bool DCCEnable[],
double DPPCLK[],
unsigned int DETBufferSizeY[],
unsigned int DETBufferSizeC[],
unsigned int SwathHeightY[],
unsigned int SwathHeightC[],
unsigned int LBBitPerPixel[],
double SwathWidthY[],
double SwathWidthC[],
double HRatio[],
double HRatioChroma[],
unsigned int vtaps[],
unsigned int VTAPsChroma[],
double VRatio[],
double VRatioChroma[],
unsigned int HTotal[],
double PixelClock[],
unsigned int BlendingAndTiming[],
double BytePerPixelDETY[],
double BytePerPixelDETC[],
double DSTXAfterScaler[],
double DSTYAfterScaler[],
bool WritebackEnable[],
enum source_format_class WritebackPixelFormat[],
double WritebackDestinationWidth[],
double WritebackDestinationHeight[],
double WritebackSourceHeight[],
enum clock_change_support *DRAMClockChangeSupport,
double *UrgentWatermark,
double *WritebackUrgentWatermark,
double *DRAMClockChangeWatermark,
double *WritebackDRAMClockChangeWatermark,
double *StutterExitWatermark,
double *StutterEnterPlusExitWatermark,
double *MinActiveDRAMClockChangeLatencySupported);
enum clock_change_support *DRAMClockChangeSupport);
static void CalculateDCFCLKDeepSleep(
struct display_mode_lib *mode_lib,
unsigned int NumberOfActivePlanes,
@ -810,29 +742,12 @@ static unsigned int dscComputeDelay(enum output_format_class pixelFormat, enum o
static bool CalculatePrefetchSchedule(
struct display_mode_lib *mode_lib,
double PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData,
double PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly,
unsigned int k,
Pipe *myPipe,
unsigned int DSCDelay,
double DPPCLKDelaySubtotalPlusCNVCFormater,
double DPPCLKDelaySCL,
double DPPCLKDelaySCLLBOnly,
double DPPCLKDelayCNVCCursor,
double DISPCLKDelaySubtotal,
unsigned int DPP_RECOUT_WIDTH,
enum output_format_class OutputFormat,
unsigned int MaxInterDCNTileRepeaters,
unsigned int VStartup,
unsigned int MaxVStartup,
unsigned int GPUVMPageTableLevels,
bool GPUVMEnable,
bool HostVMEnable,
unsigned int HostVMMaxNonCachedPageTableLevels,
double HostVMMinPageSize,
bool DynamicMetadataEnable,
bool DynamicMetadataVMEnabled,
int DynamicMetadataLinesBeforeActiveRequired,
unsigned int DynamicMetadataTransmittedBytes,
double UrgentLatency,
double UrgentExtraLatency,
double TCalc,
@ -846,7 +761,6 @@ static bool CalculatePrefetchSchedule(
unsigned int MaxNumSwathY,
double PrefetchSourceLinesC,
unsigned int SwathWidthC,
int BytePerPixelC,
double VInitPreFillC,
unsigned int MaxNumSwathC,
long swath_width_luma_ub,
@ -854,9 +768,6 @@ static bool CalculatePrefetchSchedule(
unsigned int SwathHeightY,
unsigned int SwathHeightC,
double TWait,
bool ProgressiveToInterlaceUnitInOPP,
double *DSTXAfterScaler,
double *DSTYAfterScaler,
double *DestinationLinesForPrefetch,
double *PrefetchBandwidth,
double *DestinationLinesToRequestVMInVBlank,
@ -865,15 +776,10 @@ static bool CalculatePrefetchSchedule(
double *VRatioPrefetchC,
double *RequiredPrefetchPixDataBWLuma,
double *RequiredPrefetchPixDataBWChroma,
bool *NotEnoughTimeForDynamicMetadata,
double *Tno_bw,
double *prefetch_vmrow_bw,
double *Tdmdl_vm,
double *Tdmdl,
unsigned int *VUpdateOffsetPix,
double *VUpdateWidthPix,
double *VReadyOffsetPix)
bool *NotEnoughTimeForDynamicMetadata)
{
struct vba_vars_st *v = &mode_lib->vba;
double DPPCLKDelaySubtotalPlusCNVCFormater = v->DPPCLKDelaySubtotal + v->DPPCLKDelayCNVCFormater;
bool MyError = false;
unsigned int DPPCycles = 0, DISPCLKCycles = 0;
double DSTTotalPixelsAfterScaler = 0;
@ -905,26 +811,26 @@ static bool CalculatePrefetchSchedule(
double Tdmec = 0;
double Tdmsks = 0;
if (GPUVMEnable == true && HostVMEnable == true) {
HostVMInefficiencyFactor = PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData / PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly;
HostVMDynamicLevelsTrips = HostVMMaxNonCachedPageTableLevels;
if (v->GPUVMEnable == true && v->HostVMEnable == true) {
HostVMInefficiencyFactor = v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData / v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly;
HostVMDynamicLevelsTrips = v->HostVMMaxNonCachedPageTableLevels;
} else {
HostVMInefficiencyFactor = 1;
HostVMDynamicLevelsTrips = 0;
}
CalculateDynamicMetadataParameters(
MaxInterDCNTileRepeaters,
v->MaxInterDCNTileRepeaters,
myPipe->DPPCLK,
myPipe->DISPCLK,
myPipe->DCFCLKDeepSleep,
myPipe->PixelClock,
myPipe->HTotal,
myPipe->VBlank,
DynamicMetadataTransmittedBytes,
DynamicMetadataLinesBeforeActiveRequired,
v->DynamicMetadataTransmittedBytes[k],
v->DynamicMetadataLinesBeforeActiveRequired[k],
myPipe->InterlaceEnable,
ProgressiveToInterlaceUnitInOPP,
v->ProgressiveToInterlaceUnitInOPP,
&Tsetup,
&Tdmbf,
&Tdmec,
@ -932,16 +838,16 @@ static bool CalculatePrefetchSchedule(
LineTime = myPipe->HTotal / myPipe->PixelClock;
trip_to_mem = UrgentLatency;
Tvm_trips = UrgentExtraLatency + trip_to_mem * (GPUVMPageTableLevels * (HostVMDynamicLevelsTrips + 1) - 1);
Tvm_trips = UrgentExtraLatency + trip_to_mem * (v->GPUVMMaxPageTableLevels * (HostVMDynamicLevelsTrips + 1) - 1);
if (DynamicMetadataVMEnabled == true && GPUVMEnable == true) {
*Tdmdl = TWait + Tvm_trips + trip_to_mem;
if (v->DynamicMetadataVMEnabled == true && v->GPUVMEnable == true) {
v->Tdmdl[k] = TWait + Tvm_trips + trip_to_mem;
} else {
*Tdmdl = TWait + UrgentExtraLatency;
v->Tdmdl[k] = TWait + UrgentExtraLatency;
}
if (DynamicMetadataEnable == true) {
if (VStartup * LineTime < Tsetup + *Tdmdl + Tdmbf + Tdmec + Tdmsks) {
if (v->DynamicMetadataEnable[k] == true) {
if (VStartup * LineTime < Tsetup + v->Tdmdl[k] + Tdmbf + Tdmec + Tdmsks) {
*NotEnoughTimeForDynamicMetadata = true;
} else {
*NotEnoughTimeForDynamicMetadata = false;
@ -949,39 +855,39 @@ static bool CalculatePrefetchSchedule(
dml_print("DML: Tdmbf: %fus - time for dmd transfer from dchub to dio output buffer\n", Tdmbf);
dml_print("DML: Tdmec: %fus - time dio takes to transfer dmd\n", Tdmec);
dml_print("DML: Tdmsks: %fus - time before active dmd must complete transmission at dio\n", Tdmsks);
dml_print("DML: Tdmdl: %fus - time for fabric to become ready and fetch dmd \n", *Tdmdl);
dml_print("DML: Tdmdl: %fus - time for fabric to become ready and fetch dmd \n", v->Tdmdl[k]);
}
} else {
*NotEnoughTimeForDynamicMetadata = false;
}
*Tdmdl_vm = (DynamicMetadataEnable == true && DynamicMetadataVMEnabled == true && GPUVMEnable == true ? TWait + Tvm_trips : 0);
v->Tdmdl_vm[k] = (v->DynamicMetadataEnable[k] == true && v->DynamicMetadataVMEnabled == true && v->GPUVMEnable == true ? TWait + Tvm_trips : 0);
if (myPipe->ScalerEnabled)
DPPCycles = DPPCLKDelaySubtotalPlusCNVCFormater + DPPCLKDelaySCL;
DPPCycles = DPPCLKDelaySubtotalPlusCNVCFormater + v->DPPCLKDelaySCL;
else
DPPCycles = DPPCLKDelaySubtotalPlusCNVCFormater + DPPCLKDelaySCLLBOnly;
DPPCycles = DPPCLKDelaySubtotalPlusCNVCFormater + v->DPPCLKDelaySCLLBOnly;
DPPCycles = DPPCycles + myPipe->NumberOfCursors * DPPCLKDelayCNVCCursor;
DPPCycles = DPPCycles + myPipe->NumberOfCursors * v->DPPCLKDelayCNVCCursor;
DISPCLKCycles = DISPCLKDelaySubtotal;
DISPCLKCycles = v->DISPCLKDelaySubtotal;
if (myPipe->DPPCLK == 0.0 || myPipe->DISPCLK == 0.0)
return true;
*DSTXAfterScaler = DPPCycles * myPipe->PixelClock / myPipe->DPPCLK + DISPCLKCycles * myPipe->PixelClock / myPipe->DISPCLK
v->DSTXAfterScaler[k] = DPPCycles * myPipe->PixelClock / myPipe->DPPCLK + DISPCLKCycles * myPipe->PixelClock / myPipe->DISPCLK
+ DSCDelay;
*DSTXAfterScaler = *DSTXAfterScaler + ((myPipe->ODMCombineEnabled)?18:0) + (myPipe->DPPPerPlane - 1) * DPP_RECOUT_WIDTH;
v->DSTXAfterScaler[k] = v->DSTXAfterScaler[k] + ((myPipe->ODMCombineEnabled)?18:0) + (myPipe->DPPPerPlane - 1) * DPP_RECOUT_WIDTH;
if (OutputFormat == dm_420 || (myPipe->InterlaceEnable && ProgressiveToInterlaceUnitInOPP))
*DSTYAfterScaler = 1;
if (v->OutputFormat[k] == dm_420 || (myPipe->InterlaceEnable && v->ProgressiveToInterlaceUnitInOPP))
v->DSTYAfterScaler[k] = 1;
else
*DSTYAfterScaler = 0;
v->DSTYAfterScaler[k] = 0;
DSTTotalPixelsAfterScaler = *DSTYAfterScaler * myPipe->HTotal + *DSTXAfterScaler;
*DSTYAfterScaler = dml_floor(DSTTotalPixelsAfterScaler / myPipe->HTotal, 1);
*DSTXAfterScaler = DSTTotalPixelsAfterScaler - ((double) (*DSTYAfterScaler * myPipe->HTotal));
DSTTotalPixelsAfterScaler = v->DSTYAfterScaler[k] * myPipe->HTotal + v->DSTXAfterScaler[k];
v->DSTYAfterScaler[k] = dml_floor(DSTTotalPixelsAfterScaler / myPipe->HTotal, 1);
v->DSTXAfterScaler[k] = DSTTotalPixelsAfterScaler - ((double) (v->DSTYAfterScaler[k] * myPipe->HTotal));
MyError = false;
@ -990,33 +896,33 @@ static bool CalculatePrefetchSchedule(
Tvm_trips_rounded = dml_ceil(4.0 * Tvm_trips / LineTime, 1) / 4 * LineTime;
Tr0_trips_rounded = dml_ceil(4.0 * Tr0_trips / LineTime, 1) / 4 * LineTime;
if (GPUVMEnable) {
if (GPUVMPageTableLevels >= 3) {
*Tno_bw = UrgentExtraLatency + trip_to_mem * ((GPUVMPageTableLevels - 2) - 1);
if (v->GPUVMEnable) {
if (v->GPUVMMaxPageTableLevels >= 3) {
v->Tno_bw[k] = UrgentExtraLatency + trip_to_mem * ((v->GPUVMMaxPageTableLevels - 2) - 1);
} else
*Tno_bw = 0;
v->Tno_bw[k] = 0;
} else if (!myPipe->DCCEnable)
*Tno_bw = LineTime;
v->Tno_bw[k] = LineTime;
else
*Tno_bw = LineTime / 4;
v->Tno_bw[k] = LineTime / 4;
dst_y_prefetch_equ = VStartup - (Tsetup + dml_max(TWait + TCalc, *Tdmdl)) / LineTime
- (*DSTYAfterScaler + *DSTXAfterScaler / myPipe->HTotal);
dst_y_prefetch_equ = VStartup - (Tsetup + dml_max(TWait + TCalc, v->Tdmdl[k])) / LineTime
- (v->DSTYAfterScaler[k] + v->DSTXAfterScaler[k] / myPipe->HTotal);
dst_y_prefetch_equ = dml_min(dst_y_prefetch_equ, 63.75); // limit to the reg limit of U6.2 for DST_Y_PREFETCH
Lsw_oto = dml_max(PrefetchSourceLinesY, PrefetchSourceLinesC);
Tsw_oto = Lsw_oto * LineTime;
prefetch_bw_oto = (PrefetchSourceLinesY * swath_width_luma_ub * BytePerPixelY + PrefetchSourceLinesC * swath_width_chroma_ub * BytePerPixelC) / Tsw_oto;
prefetch_bw_oto = (PrefetchSourceLinesY * swath_width_luma_ub * BytePerPixelY + PrefetchSourceLinesC * swath_width_chroma_ub * v->BytePerPixelC[k]) / Tsw_oto;
if (GPUVMEnable == true) {
Tvm_oto = dml_max3(*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / prefetch_bw_oto,
if (v->GPUVMEnable == true) {
Tvm_oto = dml_max3(v->Tno_bw[k] + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / prefetch_bw_oto,
Tvm_trips,
LineTime / 4.0);
} else
Tvm_oto = LineTime / 4.0;
if ((GPUVMEnable == true || myPipe->DCCEnable == true)) {
if ((v->GPUVMEnable == true || myPipe->DCCEnable == true)) {
Tr0_oto = dml_max3(
(MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / prefetch_bw_oto,
LineTime - Tvm_oto, LineTime / 4);
@ -1042,10 +948,10 @@ static bool CalculatePrefetchSchedule(
dml_print("DML: Tdmbf: %fus - time for dmd transfer from dchub to dio output buffer\n", Tdmbf);
dml_print("DML: Tdmec: %fus - time dio takes to transfer dmd\n", Tdmec);
dml_print("DML: Tdmsks: %fus - time before active dmd must complete transmission at dio\n", Tdmsks);
dml_print("DML: Tdmdl_vm: %fus - time for vm stages of dmd \n", *Tdmdl_vm);
dml_print("DML: Tdmdl: %fus - time for fabric to become ready and fetch dmd \n", *Tdmdl);
dml_print("DML: dst_x_after_scl: %f pixels - number of pixel clocks pipeline and buffer delay after scaler \n", *DSTXAfterScaler);
dml_print("DML: dst_y_after_scl: %d lines - number of lines of pipeline and buffer delay after scaler \n", (int)*DSTYAfterScaler);
dml_print("DML: Tdmdl_vm: %fus - time for vm stages of dmd \n", v->Tdmdl_vm[k]);
dml_print("DML: Tdmdl: %fus - time for fabric to become ready and fetch dmd \n", v->Tdmdl[k]);
dml_print("DML: dst_x_after_scl: %f pixels - number of pixel clocks pipeline and buffer delay after scaler \n", v->DSTXAfterScaler[k]);
dml_print("DML: dst_y_after_scl: %d lines - number of lines of pipeline and buffer delay after scaler \n", (int)v->DSTYAfterScaler[k]);
*PrefetchBandwidth = 0;
*DestinationLinesToRequestVMInVBlank = 0;
@ -1059,26 +965,26 @@ static bool CalculatePrefetchSchedule(
double PrefetchBandwidth3 = 0;
double PrefetchBandwidth4 = 0;
if (Tpre_rounded - *Tno_bw > 0)
if (Tpre_rounded - v->Tno_bw[k] > 0)
PrefetchBandwidth1 = (PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor + 2 * MetaRowByte
+ 2 * PixelPTEBytesPerRow * HostVMInefficiencyFactor
+ PrefetchSourceLinesY * swath_width_luma_ub * BytePerPixelY
+ PrefetchSourceLinesC * swath_width_chroma_ub * BytePerPixelC)
/ (Tpre_rounded - *Tno_bw);
+ PrefetchSourceLinesC * swath_width_chroma_ub * v->BytePerPixelC[k])
/ (Tpre_rounded - v->Tno_bw[k]);
else
PrefetchBandwidth1 = 0;
if (VStartup == MaxVStartup && (PrefetchBandwidth1 > 4 * prefetch_bw_oto) && (Tpre_rounded - Tsw_oto / 4 - 0.75 * LineTime - *Tno_bw) > 0) {
PrefetchBandwidth1 = (PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor + 2 * MetaRowByte + 2 * PixelPTEBytesPerRow * HostVMInefficiencyFactor) / (Tpre_rounded - Tsw_oto / 4 - 0.75 * LineTime - *Tno_bw);
if (VStartup == MaxVStartup && (PrefetchBandwidth1 > 4 * prefetch_bw_oto) && (Tpre_rounded - Tsw_oto / 4 - 0.75 * LineTime - v->Tno_bw[k]) > 0) {
PrefetchBandwidth1 = (PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor + 2 * MetaRowByte + 2 * PixelPTEBytesPerRow * HostVMInefficiencyFactor) / (Tpre_rounded - Tsw_oto / 4 - 0.75 * LineTime - v->Tno_bw[k]);
}
if (Tpre_rounded - *Tno_bw - 2 * Tr0_trips_rounded > 0)
if (Tpre_rounded - v->Tno_bw[k] - 2 * Tr0_trips_rounded > 0)
PrefetchBandwidth2 = (PDEAndMetaPTEBytesFrame *
HostVMInefficiencyFactor + PrefetchSourceLinesY *
swath_width_luma_ub * BytePerPixelY +
PrefetchSourceLinesC * swath_width_chroma_ub *
BytePerPixelC) /
(Tpre_rounded - *Tno_bw - 2 * Tr0_trips_rounded);
v->BytePerPixelC[k]) /
(Tpre_rounded - v->Tno_bw[k] - 2 * Tr0_trips_rounded);
else
PrefetchBandwidth2 = 0;
@ -1086,7 +992,7 @@ static bool CalculatePrefetchSchedule(
PrefetchBandwidth3 = (2 * MetaRowByte + 2 * PixelPTEBytesPerRow *
HostVMInefficiencyFactor + PrefetchSourceLinesY *
swath_width_luma_ub * BytePerPixelY + PrefetchSourceLinesC *
swath_width_chroma_ub * BytePerPixelC) / (Tpre_rounded -
swath_width_chroma_ub * v->BytePerPixelC[k]) / (Tpre_rounded -
Tvm_trips_rounded);
else
PrefetchBandwidth3 = 0;
@ -1096,7 +1002,7 @@ static bool CalculatePrefetchSchedule(
}
if (Tpre_rounded - Tvm_trips_rounded - 2 * Tr0_trips_rounded > 0)
PrefetchBandwidth4 = (PrefetchSourceLinesY * swath_width_luma_ub * BytePerPixelY + PrefetchSourceLinesC * swath_width_chroma_ub * BytePerPixelC)
PrefetchBandwidth4 = (PrefetchSourceLinesY * swath_width_luma_ub * BytePerPixelY + PrefetchSourceLinesC * swath_width_chroma_ub * v->BytePerPixelC[k])
/ (Tpre_rounded - Tvm_trips_rounded - 2 * Tr0_trips_rounded);
else
PrefetchBandwidth4 = 0;
@ -1107,7 +1013,7 @@ static bool CalculatePrefetchSchedule(
bool Case3OK;
if (PrefetchBandwidth1 > 0) {
if (*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth1
if (v->Tno_bw[k] + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth1
>= Tvm_trips_rounded && (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / PrefetchBandwidth1 >= Tr0_trips_rounded) {
Case1OK = true;
} else {
@ -1118,7 +1024,7 @@ static bool CalculatePrefetchSchedule(
}
if (PrefetchBandwidth2 > 0) {
if (*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth2
if (v->Tno_bw[k] + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth2
>= Tvm_trips_rounded && (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / PrefetchBandwidth2 < Tr0_trips_rounded) {
Case2OK = true;
} else {
@ -1129,7 +1035,7 @@ static bool CalculatePrefetchSchedule(
}
if (PrefetchBandwidth3 > 0) {
if (*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth3
if (v->Tno_bw[k] + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth3
< Tvm_trips_rounded && (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / PrefetchBandwidth3 >= Tr0_trips_rounded) {
Case3OK = true;
} else {
@ -1152,13 +1058,13 @@ static bool CalculatePrefetchSchedule(
dml_print("DML: prefetch_bw_equ: %f\n", prefetch_bw_equ);
if (prefetch_bw_equ > 0) {
if (GPUVMEnable) {
Tvm_equ = dml_max3(*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / prefetch_bw_equ, Tvm_trips, LineTime / 4);
if (v->GPUVMEnable) {
Tvm_equ = dml_max3(v->Tno_bw[k] + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / prefetch_bw_equ, Tvm_trips, LineTime / 4);
} else {
Tvm_equ = LineTime / 4;
}
if ((GPUVMEnable || myPipe->DCCEnable)) {
if ((v->GPUVMEnable || myPipe->DCCEnable)) {
Tr0_equ = dml_max4(
(MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / prefetch_bw_equ,
Tr0_trips,
@ -1227,7 +1133,7 @@ static bool CalculatePrefetchSchedule(
}
*RequiredPrefetchPixDataBWLuma = (double) PrefetchSourceLinesY / LinesToRequestPrefetchPixelData * BytePerPixelY * swath_width_luma_ub / LineTime;
*RequiredPrefetchPixDataBWChroma = (double) PrefetchSourceLinesC / LinesToRequestPrefetchPixelData * BytePerPixelC * swath_width_chroma_ub / LineTime;
*RequiredPrefetchPixDataBWChroma = (double) PrefetchSourceLinesC / LinesToRequestPrefetchPixelData * v->BytePerPixelC[k] * swath_width_chroma_ub / LineTime;
} else {
MyError = true;
dml_print("DML: MyErr set %s:%d\n", __FILE__, __LINE__);
@ -1243,9 +1149,9 @@ static bool CalculatePrefetchSchedule(
dml_print("DML: Tr0: %fus - time to fetch first row of data pagetables and first row of meta data (done in parallel)\n", TimeForFetchingRowInVBlank);
dml_print("DML: Tr1: %fus - time to fetch second row of data pagetables and second row of meta data (done in parallel)\n", TimeForFetchingRowInVBlank);
dml_print("DML: Tsw: %fus = time to fetch enough pixel data and cursor data to feed the scalers init position and detile\n", (double)LinesToRequestPrefetchPixelData * LineTime);
dml_print("DML: To: %fus - time for propagation from scaler to optc\n", (*DSTYAfterScaler + ((*DSTXAfterScaler) / (double) myPipe->HTotal)) * LineTime);
dml_print("DML: To: %fus - time for propagation from scaler to optc\n", (v->DSTYAfterScaler[k] + ((v->DSTXAfterScaler[k]) / (double) myPipe->HTotal)) * LineTime);
dml_print("DML: Tvstartup - Tsetup - Tcalc - Twait - Tpre - To > 0\n");
dml_print("DML: Tslack(pre): %fus - time left over in schedule\n", VStartup * LineTime - TimeForFetchingMetaPTE - 2 * TimeForFetchingRowInVBlank - (*DSTYAfterScaler + ((*DSTXAfterScaler) / (double) myPipe->HTotal)) * LineTime - TWait - TCalc - Tsetup);
dml_print("DML: Tslack(pre): %fus - time left over in schedule\n", VStartup * LineTime - TimeForFetchingMetaPTE - 2 * TimeForFetchingRowInVBlank - (v->DSTYAfterScaler[k] + ((v->DSTXAfterScaler[k]) / (double) myPipe->HTotal)) * LineTime - TWait - TCalc - Tsetup);
dml_print("DML: row_bytes = dpte_row_bytes (per_pipe) = PixelPTEBytesPerRow = : %d\n", PixelPTEBytesPerRow);
} else {
@ -1276,7 +1182,7 @@ static bool CalculatePrefetchSchedule(
dml_print("DML: MyErr set %s:%d\n", __FILE__, __LINE__);
}
*prefetch_vmrow_bw = dml_max(prefetch_vm_bw, prefetch_row_bw);
v->prefetch_vmrow_bw[k] = dml_max(prefetch_vm_bw, prefetch_row_bw);
}
if (MyError) {
@ -2437,30 +2343,12 @@ static void DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerforman
v->ErrorResult[k] = CalculatePrefetchSchedule(
mode_lib,
v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData,
v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly,
k,
&myPipe,
v->DSCDelay[k],
v->DPPCLKDelaySubtotal
+ v->DPPCLKDelayCNVCFormater,
v->DPPCLKDelaySCL,
v->DPPCLKDelaySCLLBOnly,
v->DPPCLKDelayCNVCCursor,
v->DISPCLKDelaySubtotal,
(unsigned int) (v->SwathWidthY[k] / v->HRatio[k]),
v->OutputFormat[k],
v->MaxInterDCNTileRepeaters,
dml_min(v->VStartupLines, v->MaxVStartupLines[k]),
v->MaxVStartupLines[k],
v->GPUVMMaxPageTableLevels,
v->GPUVMEnable,
v->HostVMEnable,
v->HostVMMaxNonCachedPageTableLevels,
v->HostVMMinPageSize,
v->DynamicMetadataEnable[k],
v->DynamicMetadataVMEnabled,
v->DynamicMetadataLinesBeforeActiveRequired[k],
v->DynamicMetadataTransmittedBytes[k],
v->UrgentLatency,
v->UrgentExtraLatency,
v->TCalc,
@ -2474,7 +2362,6 @@ static void DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerforman
v->MaxNumSwathY[k],
v->PrefetchSourceLinesC[k],
v->SwathWidthC[k],
v->BytePerPixelC[k],
v->VInitPreFillC[k],
v->MaxNumSwathC[k],
v->swath_width_luma_ub[k],
@ -2482,9 +2369,6 @@ static void DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerforman
v->SwathHeightY[k],
v->SwathHeightC[k],
TWait,
v->ProgressiveToInterlaceUnitInOPP,
&v->DSTXAfterScaler[k],
&v->DSTYAfterScaler[k],
&v->DestinationLinesForPrefetch[k],
&v->PrefetchBandwidth[k],
&v->DestinationLinesToRequestVMInVBlank[k],
@ -2493,14 +2377,7 @@ static void DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerforman
&v->VRatioPrefetchC[k],
&v->RequiredPrefetchPixDataBWLuma[k],
&v->RequiredPrefetchPixDataBWChroma[k],
&v->NotEnoughTimeForDynamicMetadata[k],
&v->Tno_bw[k],
&v->prefetch_vmrow_bw[k],
&v->Tdmdl_vm[k],
&v->Tdmdl[k],
&v->VUpdateOffsetPix[k],
&v->VUpdateWidthPix[k],
&v->VReadyOffsetPix[k]);
&v->NotEnoughTimeForDynamicMetadata[k]);
if (v->BlendingAndTiming[k] == k) {
double TotalRepeaterDelayTime = v->MaxInterDCNTileRepeaters * (2 / v->DPPCLK[k] + 3 / v->DISPCLK);
v->VUpdateWidthPix[k] = (14 / v->DCFCLKDeepSleep + 12 / v->DPPCLK[k] + TotalRepeaterDelayTime) * v->PixelClock[k];
@ -2730,62 +2607,23 @@ static void DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerforman
CalculateWatermarksAndDRAMSpeedChangeSupport(
mode_lib,
PrefetchMode,
v->NumberOfActivePlanes,
v->MaxLineBufferLines,
v->LineBufferSize,
v->DPPOutputBufferPixels,
v->DETBufferSizeInKByte[0],
v->WritebackInterfaceBufferSize,
v->DCFCLK,
v->ReturnBW,
v->GPUVMEnable,
v->dpte_group_bytes,
v->MetaChunkSize,
v->UrgentLatency,
v->UrgentExtraLatency,
v->WritebackLatency,
v->WritebackChunkSize,
v->SOCCLK,
v->FinalDRAMClockChangeLatency,
v->SRExitTime,
v->SREnterPlusExitTime,
v->DCFCLKDeepSleep,
v->DPPPerPlane,
v->DCCEnable,
v->DPPCLK,
v->DETBufferSizeY,
v->DETBufferSizeC,
v->SwathHeightY,
v->SwathHeightC,
v->LBBitPerPixel,
v->SwathWidthY,
v->SwathWidthC,
v->HRatio,
v->HRatioChroma,
v->vtaps,
v->VTAPsChroma,
v->VRatio,
v->VRatioChroma,
v->HTotal,
v->PixelClock,
v->BlendingAndTiming,
v->BytePerPixelDETY,
v->BytePerPixelDETC,
v->DSTXAfterScaler,
v->DSTYAfterScaler,
v->WritebackEnable,
v->WritebackPixelFormat,
v->WritebackDestinationWidth,
v->WritebackDestinationHeight,
v->WritebackSourceHeight,
&DRAMClockChangeSupport,
&v->UrgentWatermark,
&v->WritebackUrgentWatermark,
&v->DRAMClockChangeWatermark,
&v->WritebackDRAMClockChangeWatermark,
&v->StutterExitWatermark,
&v->StutterEnterPlusExitWatermark,
&v->MinActiveDRAMClockChangeLatencySupported);
&DRAMClockChangeSupport);
for (k = 0; k < v->NumberOfActivePlanes; ++k) {
if (v->WritebackEnable[k] == true) {
@ -4770,29 +4608,12 @@ void dml30_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
v->NoTimeForPrefetch[i][j][k] = CalculatePrefetchSchedule(
mode_lib,
v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData,
v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly,
k,
&myPipe,
v->DSCDelayPerState[i][k],
v->DPPCLKDelaySubtotal + v->DPPCLKDelayCNVCFormater,
v->DPPCLKDelaySCL,
v->DPPCLKDelaySCLLBOnly,
v->DPPCLKDelayCNVCCursor,
v->DISPCLKDelaySubtotal,
v->SwathWidthYThisState[k] / v->HRatio[k],
v->OutputFormat[k],
v->MaxInterDCNTileRepeaters,
dml_min(v->MaxVStartup, v->MaximumVStartup[i][j][k]),
v->MaximumVStartup[i][j][k],
v->GPUVMMaxPageTableLevels,
v->GPUVMEnable,
v->HostVMEnable,
v->HostVMMaxNonCachedPageTableLevels,
v->HostVMMinPageSize,
v->DynamicMetadataEnable[k],
v->DynamicMetadataVMEnabled,
v->DynamicMetadataLinesBeforeActiveRequired[k],
v->DynamicMetadataTransmittedBytes[k],
v->UrgLatency[i],
v->ExtraLatency,
v->TimeCalc,
@ -4806,7 +4627,6 @@ void dml30_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
v->MaxNumSwY[k],
v->PrefetchLinesC[i][j][k],
v->SwathWidthCThisState[k],
v->BytePerPixelC[k],
v->PrefillC[k],
v->MaxNumSwC[k],
v->swath_width_luma_ub_this_state[k],
@ -4814,9 +4634,6 @@ void dml30_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
v->SwathHeightYThisState[k],
v->SwathHeightCThisState[k],
v->TWait,
v->ProgressiveToInterlaceUnitInOPP,
&v->DSTXAfterScaler[k],
&v->DSTYAfterScaler[k],
&v->LineTimesForPrefetch[k],
&v->PrefetchBW[k],
&v->LinesForMetaPTE[k],
@ -4825,14 +4642,7 @@ void dml30_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
&v->VRatioPreC[i][j][k],
&v->RequiredPrefetchPixelDataBWLuma[i][j][k],
&v->RequiredPrefetchPixelDataBWChroma[i][j][k],
&v->NoTimeForDynamicMetadata[i][j][k],
&v->Tno_bw[k],
&v->prefetch_vmrow_bw[k],
&v->Tdmdl_vm[k],
&v->Tdmdl[k],
&v->VUpdateOffsetPix[k],
&v->VUpdateWidthPix[k],
&v->VReadyOffsetPix[k]);
&v->NoTimeForDynamicMetadata[i][j][k]);
}
for (k = 0; k <= v->NumberOfActivePlanes - 1; k++) {
@ -5007,62 +4817,23 @@ void dml30_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
CalculateWatermarksAndDRAMSpeedChangeSupport(
mode_lib,
v->PrefetchModePerState[i][j],
v->NumberOfActivePlanes,
v->MaxLineBufferLines,
v->LineBufferSize,
v->DPPOutputBufferPixels,
v->DETBufferSizeInKByte[0],
v->WritebackInterfaceBufferSize,
v->DCFCLKState[i][j],
v->ReturnBWPerState[i][j],
v->GPUVMEnable,
v->dpte_group_bytes,
v->MetaChunkSize,
v->UrgLatency[i],
v->ExtraLatency,
v->WritebackLatency,
v->WritebackChunkSize,
v->SOCCLKPerState[i],
v->FinalDRAMClockChangeLatency,
v->SRExitTime,
v->SREnterPlusExitTime,
v->ProjectedDCFCLKDeepSleep[i][j],
v->NoOfDPPThisState,
v->DCCEnable,
v->RequiredDPPCLKThisState,
v->DETBufferSizeYThisState,
v->DETBufferSizeCThisState,
v->SwathHeightYThisState,
v->SwathHeightCThisState,
v->LBBitPerPixel,
v->SwathWidthYThisState,
v->SwathWidthCThisState,
v->HRatio,
v->HRatioChroma,
v->vtaps,
v->VTAPsChroma,
v->VRatio,
v->VRatioChroma,
v->HTotal,
v->PixelClock,
v->BlendingAndTiming,
v->BytePerPixelInDETY,
v->BytePerPixelInDETC,
v->DSTXAfterScaler,
v->DSTYAfterScaler,
v->WritebackEnable,
v->WritebackPixelFormat,
v->WritebackDestinationWidth,
v->WritebackDestinationHeight,
v->WritebackSourceHeight,
&v->DRAMClockChangeSupport[i][j],
&v->UrgentWatermark,
&v->WritebackUrgentWatermark,
&v->DRAMClockChangeWatermark,
&v->WritebackDRAMClockChangeWatermark,
&v->StutterExitWatermark,
&v->StutterEnterPlusExitWatermark,
&v->MinActiveDRAMClockChangeLatencySupported);
&v->DRAMClockChangeSupport[i][j]);
}
}
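
Both CalculatePrefetchSchedule() and CalculateWatermarksAndDRAMSpeedChangeSupport() shrink the same way at both call sites: instead of unpacking dozens of per-plane values out of mode_lib->vba and passing each as its own parameter, the callee now receives the mode library plus the plane index k and reads v->Field[k] directly. A generic sketch of that refactor, with made-up field names rather than the real vba_vars_st layout:

#include <stdio.h>

#define MAX_PLANES 8

/* Made-up state mirroring the vba_vars_st idea, not the real layout. */
struct vba_state {
	double UrgentLatency;
	double HRatio[MAX_PLANES];
	double VRatio[MAX_PLANES];
};

struct mode_lib {
	struct vba_state vba;
};

/* Before: the caller unpacks shared state into long parameter lists. */
static double margin_unpacked(double UrgentLatency, double HRatio,
			      double VRatio)
{
	return UrgentLatency * HRatio / VRatio;
}

/* After: pass the library and plane index; read shared state inside. */
static double margin_indexed(struct mode_lib *mode_lib, unsigned int k)
{
	struct vba_state *v = &mode_lib->vba;

	return v->UrgentLatency * v->HRatio[k] / v->VRatio[k];
}

int main(void)
{
	struct mode_lib lib = { .vba = { .UrgentLatency = 4.0 } };

	lib.vba.HRatio[0] = 2.0;
	lib.vba.VRatio[0] = 1.0;
	printf("%f %f\n", margin_unpacked(4.0, 2.0, 1.0),
	       margin_indexed(&lib, 0));
	return 0;
}

These argument-reduction commits target the same clang -Wframe-larger-than warnings that the dml Makefile hunk above relaxes for COMPILE_TEST builds.
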
@ -5179,63 +4950,25 @@ void dml30_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
static void CalculateWatermarksAndDRAMSpeedChangeSupport(
struct display_mode_lib *mode_lib,
unsigned int PrefetchMode,
unsigned int NumberOfActivePlanes,
unsigned int MaxLineBufferLines,
unsigned int LineBufferSize,
unsigned int DPPOutputBufferPixels,
unsigned int DETBufferSizeInKByte,
unsigned int WritebackInterfaceBufferSize,
double DCFCLK,
double ReturnBW,
bool GPUVMEnable,
unsigned int dpte_group_bytes[],
unsigned int MetaChunkSize,
double UrgentLatency,
double ExtraLatency,
double WritebackLatency,
double WritebackChunkSize,
double SOCCLK,
double DRAMClockChangeLatency,
double SRExitTime,
double SREnterPlusExitTime,
double DCFCLKDeepSleep,
unsigned int DPPPerPlane[],
bool DCCEnable[],
double DPPCLK[],
unsigned int DETBufferSizeY[],
unsigned int DETBufferSizeC[],
unsigned int SwathHeightY[],
unsigned int SwathHeightC[],
unsigned int LBBitPerPixel[],
double SwathWidthY[],
double SwathWidthC[],
double HRatio[],
double HRatioChroma[],
unsigned int vtaps[],
unsigned int VTAPsChroma[],
double VRatio[],
double VRatioChroma[],
unsigned int HTotal[],
double PixelClock[],
unsigned int BlendingAndTiming[],
double BytePerPixelDETY[],
double BytePerPixelDETC[],
double DSTXAfterScaler[],
double DSTYAfterScaler[],
bool WritebackEnable[],
enum source_format_class WritebackPixelFormat[],
double WritebackDestinationWidth[],
double WritebackDestinationHeight[],
double WritebackSourceHeight[],
enum clock_change_support *DRAMClockChangeSupport,
double *UrgentWatermark,
double *WritebackUrgentWatermark,
double *DRAMClockChangeWatermark,
double *WritebackDRAMClockChangeWatermark,
double *StutterExitWatermark,
double *StutterEnterPlusExitWatermark,
double *MinActiveDRAMClockChangeLatencySupported)
enum clock_change_support *DRAMClockChangeSupport)
{
struct vba_vars_st *v = &mode_lib->vba;
double EffectiveLBLatencyHidingY = 0;
double EffectiveLBLatencyHidingC = 0;
double LinesInDETY[DC__NUM_DPP__MAX] = { 0 };
@ -5254,101 +4987,101 @@ static void CalculateWatermarksAndDRAMSpeedChangeSupport(
double WritebackDRAMClockChangeLatencyHiding = 0;
unsigned int k, j;
mode_lib->vba.TotalActiveDPP = 0;
mode_lib->vba.TotalDCCActiveDPP = 0;
for (k = 0; k < NumberOfActivePlanes; ++k) {
mode_lib->vba.TotalActiveDPP = mode_lib->vba.TotalActiveDPP + DPPPerPlane[k];
if (DCCEnable[k] == true) {
mode_lib->vba.TotalDCCActiveDPP = mode_lib->vba.TotalDCCActiveDPP + DPPPerPlane[k];
v->TotalActiveDPP = 0;
v->TotalDCCActiveDPP = 0;
for (k = 0; k < v->NumberOfActivePlanes; ++k) {
v->TotalActiveDPP = v->TotalActiveDPP + DPPPerPlane[k];
if (v->DCCEnable[k] == true) {
v->TotalDCCActiveDPP = v->TotalDCCActiveDPP + DPPPerPlane[k];
}
}
*UrgentWatermark = UrgentLatency + ExtraLatency;
v->UrgentWatermark = UrgentLatency + ExtraLatency;
*DRAMClockChangeWatermark = DRAMClockChangeLatency + *UrgentWatermark;
v->DRAMClockChangeWatermark = v->FinalDRAMClockChangeLatency + v->UrgentWatermark;
mode_lib->vba.TotalActiveWriteback = 0;
for (k = 0; k < NumberOfActivePlanes; ++k) {
if (WritebackEnable[k] == true) {
mode_lib->vba.TotalActiveWriteback = mode_lib->vba.TotalActiveWriteback + 1;
v->TotalActiveWriteback = 0;
for (k = 0; k < v->NumberOfActivePlanes; ++k) {
if (v->WritebackEnable[k] == true) {
v->TotalActiveWriteback = v->TotalActiveWriteback + 1;
}
}
if (mode_lib->vba.TotalActiveWriteback <= 1) {
*WritebackUrgentWatermark = WritebackLatency;
if (v->TotalActiveWriteback <= 1) {
v->WritebackUrgentWatermark = v->WritebackLatency;
} else {
*WritebackUrgentWatermark = WritebackLatency + WritebackChunkSize * 1024.0 / 32.0 / SOCCLK;
v->WritebackUrgentWatermark = v->WritebackLatency + v->WritebackChunkSize * 1024.0 / 32.0 / SOCCLK;
}
if (mode_lib->vba.TotalActiveWriteback <= 1) {
*WritebackDRAMClockChangeWatermark = DRAMClockChangeLatency + WritebackLatency;
if (v->TotalActiveWriteback <= 1) {
v->WritebackDRAMClockChangeWatermark = v->FinalDRAMClockChangeLatency + v->WritebackLatency;
} else {
*WritebackDRAMClockChangeWatermark = DRAMClockChangeLatency + WritebackLatency + WritebackChunkSize * 1024.0 / 32.0 / SOCCLK;
v->WritebackDRAMClockChangeWatermark = v->FinalDRAMClockChangeLatency + v->WritebackLatency + v->WritebackChunkSize * 1024.0 / 32.0 / SOCCLK;
}
for (k = 0; k < NumberOfActivePlanes; ++k) {
for (k = 0; k < v->NumberOfActivePlanes; ++k) {
mode_lib->vba.LBLatencyHidingSourceLinesY = dml_min((double) MaxLineBufferLines, dml_floor(LineBufferSize / LBBitPerPixel[k] / (SwathWidthY[k] / dml_max(HRatio[k], 1.0)), 1)) - (vtaps[k] - 1);
v->LBLatencyHidingSourceLinesY = dml_min((double) v->MaxLineBufferLines, dml_floor(v->LineBufferSize / v->LBBitPerPixel[k] / (SwathWidthY[k] / dml_max(v->HRatio[k], 1.0)), 1)) - (v->vtaps[k] - 1);
mode_lib->vba.LBLatencyHidingSourceLinesC = dml_min((double) MaxLineBufferLines, dml_floor(LineBufferSize / LBBitPerPixel[k] / (SwathWidthC[k] / dml_max(HRatioChroma[k], 1.0)), 1)) - (VTAPsChroma[k] - 1);
v->LBLatencyHidingSourceLinesC = dml_min((double) v->MaxLineBufferLines, dml_floor(v->LineBufferSize / v->LBBitPerPixel[k] / (SwathWidthC[k] / dml_max(v->HRatioChroma[k], 1.0)), 1)) - (v->VTAPsChroma[k] - 1);
EffectiveLBLatencyHidingY = mode_lib->vba.LBLatencyHidingSourceLinesY / VRatio[k] * (HTotal[k] / PixelClock[k]);
EffectiveLBLatencyHidingY = v->LBLatencyHidingSourceLinesY / v->VRatio[k] * (v->HTotal[k] / v->PixelClock[k]);
EffectiveLBLatencyHidingC = mode_lib->vba.LBLatencyHidingSourceLinesC / VRatioChroma[k] * (HTotal[k] / PixelClock[k]);
EffectiveLBLatencyHidingC = v->LBLatencyHidingSourceLinesC / v->VRatioChroma[k] * (v->HTotal[k] / v->PixelClock[k]);
LinesInDETY[k] = (double) DETBufferSizeY[k] / BytePerPixelDETY[k] / SwathWidthY[k];
LinesInDETYRoundedDownToSwath[k] = dml_floor(LinesInDETY[k], SwathHeightY[k]);
FullDETBufferingTimeY[k] = LinesInDETYRoundedDownToSwath[k] * (HTotal[k] / PixelClock[k]) / VRatio[k];
FullDETBufferingTimeY[k] = LinesInDETYRoundedDownToSwath[k] * (v->HTotal[k] / v->PixelClock[k]) / v->VRatio[k];
if (BytePerPixelDETC[k] > 0) {
LinesInDETC = mode_lib->vba.DETBufferSizeC[k] / BytePerPixelDETC[k] / SwathWidthC[k];
LinesInDETC = v->DETBufferSizeC[k] / BytePerPixelDETC[k] / SwathWidthC[k];
LinesInDETCRoundedDownToSwath = dml_floor(LinesInDETC, SwathHeightC[k]);
FullDETBufferingTimeC = LinesInDETCRoundedDownToSwath * (HTotal[k] / PixelClock[k]) / VRatioChroma[k];
FullDETBufferingTimeC = LinesInDETCRoundedDownToSwath * (v->HTotal[k] / v->PixelClock[k]) / v->VRatioChroma[k];
} else {
LinesInDETC = 0;
FullDETBufferingTimeC = 999999;
}
ActiveDRAMClockChangeLatencyMarginY = EffectiveLBLatencyHidingY + FullDETBufferingTimeY[k] - *UrgentWatermark - (HTotal[k] / PixelClock[k]) * (DSTXAfterScaler[k] / HTotal[k] + DSTYAfterScaler[k]) - *DRAMClockChangeWatermark;
ActiveDRAMClockChangeLatencyMarginY = EffectiveLBLatencyHidingY + FullDETBufferingTimeY[k] - v->UrgentWatermark - (v->HTotal[k] / v->PixelClock[k]) * (v->DSTXAfterScaler[k] / v->HTotal[k] + v->DSTYAfterScaler[k]) - v->DRAMClockChangeWatermark;
if (NumberOfActivePlanes > 1) {
ActiveDRAMClockChangeLatencyMarginY = ActiveDRAMClockChangeLatencyMarginY - (1 - 1.0 / NumberOfActivePlanes) * SwathHeightY[k] * HTotal[k] / PixelClock[k] / VRatio[k];
if (v->NumberOfActivePlanes > 1) {
ActiveDRAMClockChangeLatencyMarginY = ActiveDRAMClockChangeLatencyMarginY - (1 - 1.0 / v->NumberOfActivePlanes) * SwathHeightY[k] * v->HTotal[k] / v->PixelClock[k] / v->VRatio[k];
}
if (BytePerPixelDETC[k] > 0) {
ActiveDRAMClockChangeLatencyMarginC = EffectiveLBLatencyHidingC + FullDETBufferingTimeC - *UrgentWatermark - (HTotal[k] / PixelClock[k]) * (DSTXAfterScaler[k] / HTotal[k] + DSTYAfterScaler[k]) - *DRAMClockChangeWatermark;
ActiveDRAMClockChangeLatencyMarginC = EffectiveLBLatencyHidingC + FullDETBufferingTimeC - v->UrgentWatermark - (v->HTotal[k] / v->PixelClock[k]) * (v->DSTXAfterScaler[k] / v->HTotal[k] + v->DSTYAfterScaler[k]) - v->DRAMClockChangeWatermark;
if (NumberOfActivePlanes > 1) {
ActiveDRAMClockChangeLatencyMarginC = ActiveDRAMClockChangeLatencyMarginC - (1 - 1.0 / NumberOfActivePlanes) * SwathHeightC[k] * HTotal[k] / PixelClock[k] / VRatioChroma[k];
if (v->NumberOfActivePlanes > 1) {
ActiveDRAMClockChangeLatencyMarginC = ActiveDRAMClockChangeLatencyMarginC - (1 - 1.0 / v->NumberOfActivePlanes) * SwathHeightC[k] * v->HTotal[k] / v->PixelClock[k] / v->VRatioChroma[k];
}
mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k] = dml_min(ActiveDRAMClockChangeLatencyMarginY, ActiveDRAMClockChangeLatencyMarginC);
v->ActiveDRAMClockChangeLatencyMargin[k] = dml_min(ActiveDRAMClockChangeLatencyMarginY, ActiveDRAMClockChangeLatencyMarginC);
} else {
mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k] = ActiveDRAMClockChangeLatencyMarginY;
v->ActiveDRAMClockChangeLatencyMargin[k] = ActiveDRAMClockChangeLatencyMarginY;
}
if (WritebackEnable[k] == true) {
if (v->WritebackEnable[k] == true) {
-WritebackDRAMClockChangeLatencyHiding = WritebackInterfaceBufferSize * 1024 / (WritebackDestinationWidth[k] * WritebackDestinationHeight[k] / (WritebackSourceHeight[k] * HTotal[k] / PixelClock[k]) * 4);
-if (WritebackPixelFormat[k] == dm_444_64) {
+WritebackDRAMClockChangeLatencyHiding = v->WritebackInterfaceBufferSize * 1024 / (v->WritebackDestinationWidth[k] * v->WritebackDestinationHeight[k] / (v->WritebackSourceHeight[k] * v->HTotal[k] / v->PixelClock[k]) * 4);
+if (v->WritebackPixelFormat[k] == dm_444_64) {
WritebackDRAMClockChangeLatencyHiding = WritebackDRAMClockChangeLatencyHiding / 2;
}
-if (mode_lib->vba.WritebackConfiguration == dm_whole_buffer_for_single_stream_interleave) {
+if (v->WritebackConfiguration == dm_whole_buffer_for_single_stream_interleave) {
WritebackDRAMClockChangeLatencyHiding = WritebackDRAMClockChangeLatencyHiding * 2;
}
-WritebackDRAMClockChangeLatencyMargin = WritebackDRAMClockChangeLatencyHiding - mode_lib->vba.WritebackDRAMClockChangeWatermark;
-mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k] = dml_min(mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k], WritebackDRAMClockChangeLatencyMargin);
+WritebackDRAMClockChangeLatencyMargin = WritebackDRAMClockChangeLatencyHiding - v->WritebackDRAMClockChangeWatermark;
+v->ActiveDRAMClockChangeLatencyMargin[k] = dml_min(v->ActiveDRAMClockChangeLatencyMargin[k], WritebackDRAMClockChangeLatencyMargin);
}
}
-mode_lib->vba.MinActiveDRAMClockChangeMargin = 999999;
+v->MinActiveDRAMClockChangeMargin = 999999;
PlaneWithMinActiveDRAMClockChangeMargin = 0;
-for (k = 0; k < NumberOfActivePlanes; ++k) {
-if (mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k] < mode_lib->vba.MinActiveDRAMClockChangeMargin) {
-mode_lib->vba.MinActiveDRAMClockChangeMargin = mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k];
-if (BlendingAndTiming[k] == k) {
+for (k = 0; k < v->NumberOfActivePlanes; ++k) {
+if (v->ActiveDRAMClockChangeLatencyMargin[k] < v->MinActiveDRAMClockChangeMargin) {
+v->MinActiveDRAMClockChangeMargin = v->ActiveDRAMClockChangeLatencyMargin[k];
+if (v->BlendingAndTiming[k] == k) {
PlaneWithMinActiveDRAMClockChangeMargin = k;
} else {
-for (j = 0; j < NumberOfActivePlanes; ++j) {
-if (BlendingAndTiming[k] == j) {
+for (j = 0; j < v->NumberOfActivePlanes; ++j) {
+if (v->BlendingAndTiming[k] == j) {
PlaneWithMinActiveDRAMClockChangeMargin = j;
}
}
@ -5356,40 +5089,40 @@ static void CalculateWatermarksAndDRAMSpeedChangeSupport(
}
}
-*MinActiveDRAMClockChangeLatencySupported = mode_lib->vba.MinActiveDRAMClockChangeMargin + DRAMClockChangeLatency;
+v->MinActiveDRAMClockChangeLatencySupported = v->MinActiveDRAMClockChangeMargin + v->FinalDRAMClockChangeLatency;
SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank = 999999;
-for (k = 0; k < NumberOfActivePlanes; ++k) {
-if (!((k == PlaneWithMinActiveDRAMClockChangeMargin) && (BlendingAndTiming[k] == k)) && !(BlendingAndTiming[k] == PlaneWithMinActiveDRAMClockChangeMargin) && mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k] < SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank) {
-SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank = mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k];
+for (k = 0; k < v->NumberOfActivePlanes; ++k) {
+if (!((k == PlaneWithMinActiveDRAMClockChangeMargin) && (v->BlendingAndTiming[k] == k)) && !(v->BlendingAndTiming[k] == PlaneWithMinActiveDRAMClockChangeMargin) && v->ActiveDRAMClockChangeLatencyMargin[k] < SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank) {
+SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank = v->ActiveDRAMClockChangeLatencyMargin[k];
}
}
-mode_lib->vba.TotalNumberOfActiveOTG = 0;
-for (k = 0; k < NumberOfActivePlanes; ++k) {
-if (BlendingAndTiming[k] == k) {
-mode_lib->vba.TotalNumberOfActiveOTG = mode_lib->vba.TotalNumberOfActiveOTG + 1;
+v->TotalNumberOfActiveOTG = 0;
+for (k = 0; k < v->NumberOfActivePlanes; ++k) {
+if (v->BlendingAndTiming[k] == k) {
+v->TotalNumberOfActiveOTG = v->TotalNumberOfActiveOTG + 1;
}
}
-if (mode_lib->vba.MinActiveDRAMClockChangeMargin > 0) {
+if (v->MinActiveDRAMClockChangeMargin > 0) {
*DRAMClockChangeSupport = dm_dram_clock_change_vactive;
-} else if (((mode_lib->vba.SynchronizedVBlank == true || mode_lib->vba.TotalNumberOfActiveOTG == 1 || SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank > 0) && PrefetchMode == 0)) {
+} else if (((v->SynchronizedVBlank == true || v->TotalNumberOfActiveOTG == 1 || SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank > 0) && PrefetchMode == 0)) {
*DRAMClockChangeSupport = dm_dram_clock_change_vblank;
} else {
*DRAMClockChangeSupport = dm_dram_clock_change_unsupported;
}
FullDETBufferingTimeYStutterCriticalPlane = FullDETBufferingTimeY[0];
-for (k = 0; k < NumberOfActivePlanes; ++k) {
+for (k = 0; k < v->NumberOfActivePlanes; ++k) {
if (FullDETBufferingTimeY[k] <= FullDETBufferingTimeYStutterCriticalPlane) {
FullDETBufferingTimeYStutterCriticalPlane = FullDETBufferingTimeY[k];
-TimeToFinishSwathTransferStutterCriticalPlane = (SwathHeightY[k] - (LinesInDETY[k] - LinesInDETYRoundedDownToSwath[k])) * (HTotal[k] / PixelClock[k]) / VRatio[k];
+TimeToFinishSwathTransferStutterCriticalPlane = (SwathHeightY[k] - (LinesInDETY[k] - LinesInDETYRoundedDownToSwath[k])) * (v->HTotal[k] / v->PixelClock[k]) / v->VRatio[k];
}
}
-*StutterExitWatermark = SRExitTime + ExtraLatency + 10 / DCFCLKDeepSleep;
-*StutterEnterPlusExitWatermark = dml_max(SREnterPlusExitTime + ExtraLatency + 10 / DCFCLKDeepSleep, TimeToFinishSwathTransferStutterCriticalPlane);
+v->StutterExitWatermark = v->SRExitTime + ExtraLatency + 10 / DCFCLKDeepSleep;
+v->StutterEnterPlusExitWatermark = dml_max(v->SREnterPlusExitTime + ExtraLatency + 10 / DCFCLKDeepSleep, TimeToFinishSwathTransferStutterCriticalPlane);
}
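All of the replacements in the hunk above follow one pattern: repeated `mode_lib->vba.` accesses are shortened through a local alias. A minimal sketch of that pattern, assuming the DML type names used elsewhere in this file (`struct display_mode_lib` embedding a `vba` block of type `struct vba_vars_st`):

	static void CalculateWatermarksAndDRAMSpeedChangeSupport(
			struct display_mode_lib *mode_lib /* , ... */)
	{
		/* Alias the VBA variable block once; every former
		 * mode_lib->vba.Field access then reads as v->Field. */
		struct vba_vars_st *v = &mode_lib->vba;

		v->MinActiveDRAMClockChangeMargin = 999999;
		/* ... */
	}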


@ -1610,38 +1610,12 @@ dce110_select_crtc_source(struct pipe_ctx *pipe_ctx)
struct dc_bios *bios = link->ctx->dc_bios;
struct bp_crtc_source_select crtc_source_select = {0};
enum engine_id engine_id = link->link_enc->preferred_engine;
-uint8_t bit_depth;
if (dc_is_rgb_signal(pipe_ctx->stream->signal))
engine_id = link->link_enc->analog_engine;
-switch (pipe_ctx->stream->timing.display_color_depth) {
-case COLOR_DEPTH_UNDEFINED:
-bit_depth = 0;
-break;
-case COLOR_DEPTH_666:
-bit_depth = 6;
-break;
-default:
-case COLOR_DEPTH_888:
-bit_depth = 8;
-break;
-case COLOR_DEPTH_101010:
-bit_depth = 10;
-break;
-case COLOR_DEPTH_121212:
-bit_depth = 12;
-break;
-case COLOR_DEPTH_141414:
-bit_depth = 14;
-break;
-case COLOR_DEPTH_161616:
-bit_depth = 16;
-break;
-}
crtc_source_select.controller_id = CONTROLLER_ID_D0 + pipe_ctx->stream_res.tg->inst;
-crtc_source_select.bit_depth = bit_depth;
+crtc_source_select.color_depth = pipe_ctx->stream->timing.display_color_depth;
crtc_source_select.engine_id = engine_id;
crtc_source_select.sink_signal = pipe_ctx->stream->signal;


@ -932,7 +932,7 @@ static bool link_detect_dac_load_detect(struct dc_link *link)
struct link_encoder *link_enc = link->link_enc;
enum engine_id engine_id = link_enc->preferred_engine;
enum dal_device_type device_type = DEVICE_TYPE_CRT;
-enum bp_result bp_result;
+enum bp_result bp_result = BP_RESULT_UNSUPPORTED;
uint32_t enum_id;
switch (engine_id) {
@ -946,7 +946,9 @@ static bool link_detect_dac_load_detect(struct dc_link *link)
break;
}
-bp_result = bios->funcs->dac_load_detection(bios, engine_id, device_type, enum_id);
+if (bios->funcs->dac_load_detection)
+bp_result = bios->funcs->dac_load_detection(bios, engine_id, device_type, enum_id);
return bp_result == BP_RESULT_OK;
}


@ -136,7 +136,7 @@ struct bp_crtc_source_select {
enum engine_id engine_id;
enum controller_id controller_id;
enum signal_type sink_signal;
-uint8_t bit_depth;
+enum dc_color_depth color_depth;
};
struct bp_transmitter_control {


@ -2455,24 +2455,21 @@ static int navi10_update_pcie_parameters(struct smu_context *smu,
}
for (i = 0; i < NUM_LINK_LEVELS; i++) {
-if (pptable->PcieGenSpeed[i] > pcie_gen_cap ||
-pptable->PcieLaneCount[i] > pcie_width_cap) {
-dpm_context->dpm_tables.pcie_table.pcie_gen[i] =
-pptable->PcieGenSpeed[i] > pcie_gen_cap ?
-pcie_gen_cap : pptable->PcieGenSpeed[i];
-dpm_context->dpm_tables.pcie_table.pcie_lane[i] =
-pptable->PcieLaneCount[i] > pcie_width_cap ?
-pcie_width_cap : pptable->PcieLaneCount[i];
-smu_pcie_arg = i << 16;
-smu_pcie_arg |= pcie_gen_cap << 8;
-smu_pcie_arg |= pcie_width_cap;
-ret = smu_cmn_send_smc_msg_with_param(smu,
-SMU_MSG_OverridePcieParameters,
-smu_pcie_arg,
-NULL);
-if (ret)
-break;
-}
+dpm_context->dpm_tables.pcie_table.pcie_gen[i] =
+pptable->PcieGenSpeed[i] > pcie_gen_cap ?
+pcie_gen_cap : pptable->PcieGenSpeed[i];
+dpm_context->dpm_tables.pcie_table.pcie_lane[i] =
+pptable->PcieLaneCount[i] > pcie_width_cap ?
+pcie_width_cap : pptable->PcieLaneCount[i];
+smu_pcie_arg = i << 16;
+smu_pcie_arg |= dpm_context->dpm_tables.pcie_table.pcie_gen[i] << 8;
+smu_pcie_arg |= dpm_context->dpm_tables.pcie_table.pcie_lane[i];
+ret = smu_cmn_send_smc_msg_with_param(smu,
+SMU_MSG_OverridePcieParameters,
+smu_pcie_arg,
+NULL);
+if (ret)
+return ret;
}
return ret;
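The behavioral fix above is easier to see reduced to its core: the old code sent SMU_MSG_OverridePcieParameters only when the pptable exceeded the platform caps, and then programmed the raw caps; the new code programs every link level with the capped pptable value. A sketch of that logic (names taken from the hunk above; min_t() is the kernel macro, used here for brevity):

	/* Program each PCIe link level, capped to the platform limits. */
	for (i = 0; i < NUM_LINK_LEVELS; i++) {
		uint32_t gen  = min_t(uint32_t, pptable->PcieGenSpeed[i], pcie_gen_cap);
		uint32_t lane = min_t(uint32_t, pptable->PcieLaneCount[i], pcie_width_cap);
		uint32_t arg  = (i << 16) | (gen << 8) | lane;

		ret = smu_cmn_send_smc_msg_with_param(smu,
						      SMU_MSG_OverridePcieParameters,
						      arg, NULL);
		if (ret)
			return ret;
	}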


@ -2923,8 +2923,13 @@ static int smu_v13_0_0_mode1_reset(struct smu_context *smu)
break;
}
-if (!ret)
+if (!ret) {
+/* disable mmio access while doing mode 1 reset*/
+smu->adev->no_hw_access = true;
+/* ensure no_hw_access is globally visible before any MMIO */
+smp_mb();
msleep(SMU13_MODE1_RESET_WAIT_TIME_IN_MS);
+}
return ret;
}
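Setting no_hw_access before the reset wait only matters because register accessors consult it; a simplified, illustrative sketch of the reader side (not the exact amdgpu implementation):

	/* MMIO read helper that honors adev->no_hw_access, as set above. */
	static u32 example_mmio_rreg(struct amdgpu_device *adev, u32 reg)
	{
		if (adev->no_hw_access)
			return 0; /* mode-1 reset in flight: leave MMIO alone */

		return readl(adev->rmmio + reg * 4); /* reg as a dword offset */
	}

The smp_mb() in the hunk makes the flag store visible to other CPUs before the driver starts the long msleep() that covers the reset.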


@ -2143,10 +2143,15 @@ static int smu_v14_0_2_mode1_reset(struct smu_context *smu)
ret = smu_cmn_send_debug_smc_msg(smu, DEBUGSMC_MSG_Mode1Reset);
if (!ret) {
-if (amdgpu_emu_mode == 1)
+if (amdgpu_emu_mode == 1) {
msleep(50000);
-else
+} else {
+/* disable mmio access while doing mode 1 reset*/
+smu->adev->no_hw_access = true;
+/* ensure no_hw_access is globally visible before any MMIO */
+smp_mb();
msleep(1000);
+}
}
return ret;


@ -1162,8 +1162,18 @@ crtc_needs_disable(struct drm_crtc_state *old_state,
new_state->self_refresh_active;
}
-static void
-encoder_bridge_disable(struct drm_device *dev, struct drm_atomic_state *state)
+/**
+* drm_atomic_helper_commit_encoder_bridge_disable - disable bridges and encoder
+* @dev: DRM device
+* @state: the driver state object
+*
+* Loops over all connectors in the current state and if the CRTC needs
+* it, disables the bridge chain all the way, then disables the encoder
+* afterwards.
+*/
+void
+drm_atomic_helper_commit_encoder_bridge_disable(struct drm_device *dev,
+struct drm_atomic_state *state)
{
struct drm_connector *connector;
struct drm_connector_state *old_conn_state, *new_conn_state;
@ -1229,9 +1239,18 @@ encoder_bridge_disable(struct drm_device *dev, struct drm_atomic_state *state)
}
}
}
+EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_disable);
-static void
-crtc_disable(struct drm_device *dev, struct drm_atomic_state *state)
+/**
+* drm_atomic_helper_commit_crtc_disable - disable CRTCs
+* @dev: DRM device
+* @state: the driver state object
+*
+* Loops over all CRTCs in the current state and if the CRTC needs
+* it, disables it.
+*/
+void
+drm_atomic_helper_commit_crtc_disable(struct drm_device *dev, struct drm_atomic_state *state)
{
struct drm_crtc *crtc;
struct drm_crtc_state *old_crtc_state, *new_crtc_state;
@ -1282,9 +1301,18 @@ crtc_disable(struct drm_device *dev, struct drm_atomic_state *state)
drm_crtc_vblank_put(crtc);
}
}
+EXPORT_SYMBOL(drm_atomic_helper_commit_crtc_disable);
-static void
-encoder_bridge_post_disable(struct drm_device *dev, struct drm_atomic_state *state)
+/**
+* drm_atomic_helper_commit_encoder_bridge_post_disable - post-disable encoder bridges
+* @dev: DRM device
+* @state: the driver state object
+*
+* Loops over all connectors in the current state and if the CRTC needs
+* it, post-disables all encoder bridges.
+*/
+void
+drm_atomic_helper_commit_encoder_bridge_post_disable(struct drm_device *dev, struct drm_atomic_state *state)
{
struct drm_connector *connector;
struct drm_connector_state *old_conn_state, *new_conn_state;
@ -1335,15 +1363,16 @@ encoder_bridge_post_disable(struct drm_device *dev, struct drm_atomic_state *sta
drm_bridge_put(bridge);
}
}
+EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_post_disable);
static void
disable_outputs(struct drm_device *dev, struct drm_atomic_state *state)
{
-encoder_bridge_disable(dev, state);
+drm_atomic_helper_commit_encoder_bridge_disable(dev, state);
-crtc_disable(dev, state);
+drm_atomic_helper_commit_encoder_bridge_post_disable(dev, state);
-encoder_bridge_post_disable(dev, state);
+drm_atomic_helper_commit_crtc_disable(dev, state);
}
/**
@ -1446,8 +1475,17 @@ void drm_atomic_helper_calc_timestamping_constants(struct drm_atomic_state *stat
}
EXPORT_SYMBOL(drm_atomic_helper_calc_timestamping_constants);
-static void
-crtc_set_mode(struct drm_device *dev, struct drm_atomic_state *state)
+/**
+* drm_atomic_helper_commit_crtc_set_mode - set the new mode
+* @dev: DRM device
+* @state: the driver state object
+*
+* Loops over all connectors in the current state and if the mode has
+* changed, change the mode of the CRTC, then call down the bridge
+* chain and change the mode in all bridges as well.
+*/
+void
+drm_atomic_helper_commit_crtc_set_mode(struct drm_device *dev, struct drm_atomic_state *state)
{
struct drm_crtc *crtc;
struct drm_crtc_state *new_crtc_state;
@ -1508,6 +1546,7 @@ crtc_set_mode(struct drm_device *dev, struct drm_atomic_state *state)
drm_bridge_put(bridge);
}
}
+EXPORT_SYMBOL(drm_atomic_helper_commit_crtc_set_mode);
/**
* drm_atomic_helper_commit_modeset_disables - modeset commit to disable outputs
@ -1531,12 +1570,21 @@ void drm_atomic_helper_commit_modeset_disables(struct drm_device *dev,
drm_atomic_helper_update_legacy_modeset_state(dev, state);
drm_atomic_helper_calc_timestamping_constants(state);
-crtc_set_mode(dev, state);
+drm_atomic_helper_commit_crtc_set_mode(dev, state);
}
EXPORT_SYMBOL(drm_atomic_helper_commit_modeset_disables);
-static void drm_atomic_helper_commit_writebacks(struct drm_device *dev,
-struct drm_atomic_state *state)
+/**
+* drm_atomic_helper_commit_writebacks - issue writebacks
+* @dev: DRM device
+* @state: atomic state object being committed
+*
+* This loops over the connectors, checks if the new state requires
+* a writeback job to be issued and in that case issues an atomic
+* commit on each connector.
+*/
+void drm_atomic_helper_commit_writebacks(struct drm_device *dev,
+struct drm_atomic_state *state)
{
struct drm_connector *connector;
struct drm_connector_state *new_conn_state;
@ -1555,9 +1603,18 @@ static void drm_atomic_helper_commit_writebacks(struct drm_device *dev,
}
}
}
+EXPORT_SYMBOL(drm_atomic_helper_commit_writebacks);
-static void
-encoder_bridge_pre_enable(struct drm_device *dev, struct drm_atomic_state *state)
+/**
+* drm_atomic_helper_commit_encoder_bridge_pre_enable - pre-enable bridges
+* @dev: DRM device
+* @state: atomic state object being committed
+*
+* This loops over the connectors and if the CRTC needs it, pre-enables
+* the entire bridge chain.
+*/
+void
+drm_atomic_helper_commit_encoder_bridge_pre_enable(struct drm_device *dev, struct drm_atomic_state *state)
{
struct drm_connector *connector;
struct drm_connector_state *new_conn_state;
@ -1588,9 +1645,18 @@ encoder_bridge_pre_enable(struct drm_device *dev, struct drm_atomic_state *state
drm_bridge_put(bridge);
}
}
+EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_pre_enable);
-static void
-crtc_enable(struct drm_device *dev, struct drm_atomic_state *state)
+/**
+* drm_atomic_helper_commit_crtc_enable - enables the CRTCs
+* @dev: DRM device
+* @state: atomic state object being committed
+*
+* This loops over CRTCs in the new state, and if the CRTC needs
+* it, enables it.
+*/
+void
+drm_atomic_helper_commit_crtc_enable(struct drm_device *dev, struct drm_atomic_state *state)
{
struct drm_crtc *crtc;
struct drm_crtc_state *old_crtc_state;
@ -1619,9 +1685,18 @@ crtc_enable(struct drm_device *dev, struct drm_atomic_state *state)
}
}
}
+EXPORT_SYMBOL(drm_atomic_helper_commit_crtc_enable);
-static void
-encoder_bridge_enable(struct drm_device *dev, struct drm_atomic_state *state)
+/**
+* drm_atomic_helper_commit_encoder_bridge_enable - enables the bridges
+* @dev: DRM device
+* @state: atomic state object being committed
+*
+* This loops over all connectors in the new state, and if the CRTC needs
+* it, enables the entire bridge chain.
+*/
+void
+drm_atomic_helper_commit_encoder_bridge_enable(struct drm_device *dev, struct drm_atomic_state *state)
{
struct drm_connector *connector;
struct drm_connector_state *new_conn_state;
@ -1664,6 +1739,7 @@ encoder_bridge_enable(struct drm_device *dev, struct drm_atomic_state *state)
drm_bridge_put(bridge);
}
}
+EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_enable);
/**
* drm_atomic_helper_commit_modeset_enables - modeset commit to enable outputs
@ -1682,11 +1758,11 @@ encoder_bridge_enable(struct drm_device *dev, struct drm_atomic_state *state)
void drm_atomic_helper_commit_modeset_enables(struct drm_device *dev,
struct drm_atomic_state *state)
{
-encoder_bridge_pre_enable(dev, state);
+drm_atomic_helper_commit_crtc_enable(dev, state);
-crtc_enable(dev, state);
+drm_atomic_helper_commit_encoder_bridge_pre_enable(dev, state);
-encoder_bridge_enable(dev, state);
+drm_atomic_helper_commit_encoder_bridge_enable(dev, state);
drm_atomic_helper_commit_writebacks(dev, state);
}


@ -366,6 +366,9 @@ static void drm_fb_helper_damage_work(struct work_struct *work)
{
struct drm_fb_helper *helper = container_of(work, struct drm_fb_helper, damage_work);
+if (helper->info->state != FBINFO_STATE_RUNNING)
+return;
drm_fb_helper_fb_dirty(helper);
}
@ -732,6 +735,13 @@ void drm_fb_helper_set_suspend_unlocked(struct drm_fb_helper *fb_helper,
if (fb_helper->info->state != FBINFO_STATE_RUNNING)
return;
+/*
+* Cancel pending damage work. During GPU reset, VBlank
+* interrupts are disabled and drm_fb_helper_fb_dirty()
+* would wait for VBlank timeout otherwise.
+*/
+cancel_work_sync(&fb_helper->damage_work);
console_lock();
} else {
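For context, drivers call this helper around events such as a GPU reset; a hedged sketch of such a call site (the surrounding driver code is illustrative):

	/* Suspend fbdev emulation before resetting the GPU. With the change
	 * above, this also cancels queued damage work, so that
	 * drm_fb_helper_fb_dirty() cannot block waiting for vblanks that
	 * will not arrive while the hardware is down. */
	drm_fb_helper_set_suspend_unlocked(fb_helper, true);

	/* ... reset the GPU ... */

	drm_fb_helper_set_suspend_unlocked(fb_helper, false);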


@ -1692,7 +1692,7 @@ static irqreturn_t hdmi_irq_thread(int irq, void *arg)
{
struct hdmi_context *hdata = arg;
-mod_delayed_work(system_wq, &hdata->hotplug_work,
+mod_delayed_work(system_percpu_wq, &hdata->hotplug_work,
msecs_to_jiffies(HOTPLUG_DEBOUNCE_MS));
return IRQ_HANDLED;


@ -1002,12 +1002,6 @@ static int mtk_dsi_host_attach(struct mipi_dsi_host *host,
return PTR_ERR(dsi->next_bridge);
}
-/*
-* set flag to request the DSI host bridge be pre-enabled before device bridge
-* in the chain, so the DSI host is ready when the device bridge is pre-enabled
-*/
-dsi->next_bridge->pre_enable_prev_first = true;
drm_bridge_add(&dsi->bridge);
ret = component_add(host->dev, &mtk_dsi_component_ops);


@ -30,6 +30,9 @@ ad102_gsp = {
.booter.ctor = ga102_gsp_booter_ctor,
+.fwsec_sb.ctor = tu102_gsp_fwsec_sb_ctor,
+.fwsec_sb.dtor = tu102_gsp_fwsec_sb_dtor,
.dtor = r535_gsp_dtor,
.oneinit = tu102_gsp_oneinit,
.init = tu102_gsp_init,


@ -337,18 +337,12 @@ nvkm_gsp_fwsec_sb(struct nvkm_gsp *gsp)
}
int
-nvkm_gsp_fwsec_sb_ctor(struct nvkm_gsp *gsp)
+nvkm_gsp_fwsec_sb_init(struct nvkm_gsp *gsp)
{
return nvkm_gsp_fwsec_init(gsp, &gsp->fws.falcon.sb, "fwsec-sb",
NVFW_FALCON_APPIF_DMEMMAPPER_CMD_SB);
}
-void
-nvkm_gsp_fwsec_sb_dtor(struct nvkm_gsp *gsp)
-{
-nvkm_falcon_fw_dtor(&gsp->fws.falcon.sb);
-}
int
nvkm_gsp_fwsec_frts(struct nvkm_gsp *gsp)
{


@ -47,6 +47,9 @@ ga100_gsp = {
.booter.ctor = tu102_gsp_booter_ctor,
+.fwsec_sb.ctor = tu102_gsp_fwsec_sb_ctor,
+.fwsec_sb.dtor = tu102_gsp_fwsec_sb_dtor,
.dtor = r535_gsp_dtor,
.oneinit = tu102_gsp_oneinit,
.init = tu102_gsp_init,


@ -158,6 +158,9 @@ ga102_gsp_r535 = {
.booter.ctor = ga102_gsp_booter_ctor,
+.fwsec_sb.ctor = tu102_gsp_fwsec_sb_ctor,
+.fwsec_sb.dtor = tu102_gsp_fwsec_sb_dtor,
.dtor = r535_gsp_dtor,
.oneinit = tu102_gsp_oneinit,
.init = tu102_gsp_init,


@ -7,9 +7,8 @@ enum nvkm_acr_lsf_id;
int nvkm_gsp_fwsec_frts(struct nvkm_gsp *);
-int nvkm_gsp_fwsec_sb_ctor(struct nvkm_gsp *);
int nvkm_gsp_fwsec_sb(struct nvkm_gsp *);
-void nvkm_gsp_fwsec_sb_dtor(struct nvkm_gsp *);
+int nvkm_gsp_fwsec_sb_init(struct nvkm_gsp *gsp);
struct nvkm_gsp_fwif {
int version;
@ -52,6 +51,11 @@ struct nvkm_gsp_func {
struct nvkm_falcon *, struct nvkm_falcon_fw *);
} booter;
+struct {
+int (*ctor)(struct nvkm_gsp *);
+void (*dtor)(struct nvkm_gsp *);
+} fwsec_sb;
void (*dtor)(struct nvkm_gsp *);
int (*oneinit)(struct nvkm_gsp *);
int (*init)(struct nvkm_gsp *);
@ -67,6 +71,8 @@ extern const struct nvkm_falcon_func tu102_gsp_flcn;
extern const struct nvkm_falcon_fw_func tu102_gsp_fwsec;
int tu102_gsp_booter_ctor(struct nvkm_gsp *, const char *, const struct firmware *,
struct nvkm_falcon *, struct nvkm_falcon_fw *);
+int tu102_gsp_fwsec_sb_ctor(struct nvkm_gsp *);
+void tu102_gsp_fwsec_sb_dtor(struct nvkm_gsp *);
int tu102_gsp_oneinit(struct nvkm_gsp *);
int tu102_gsp_init(struct nvkm_gsp *);
int tu102_gsp_fini(struct nvkm_gsp *, bool suspend);
@ -91,5 +97,18 @@ int r535_gsp_fini(struct nvkm_gsp *, bool suspend);
int nvkm_gsp_new_(const struct nvkm_gsp_fwif *, struct nvkm_device *, enum nvkm_subdev_type, int,
struct nvkm_gsp **);
+static inline int nvkm_gsp_fwsec_sb_ctor(struct nvkm_gsp *gsp)
+{
+if (gsp->func->fwsec_sb.ctor)
+return gsp->func->fwsec_sb.ctor(gsp);
+return 0;
+}
+static inline void nvkm_gsp_fwsec_sb_dtor(struct nvkm_gsp *gsp)
+{
+if (gsp->func->fwsec_sb.dtor)
+gsp->func->fwsec_sb.dtor(gsp);
+}
extern const struct nvkm_gsp_func gv100_gsp;
#endif


@ -30,6 +30,18 @@
#include <nvfw/fw.h>
#include <nvfw/hs.h>
+int
+tu102_gsp_fwsec_sb_ctor(struct nvkm_gsp *gsp)
+{
+return nvkm_gsp_fwsec_sb_init(gsp);
+}
+void
+tu102_gsp_fwsec_sb_dtor(struct nvkm_gsp *gsp)
+{
+nvkm_falcon_fw_dtor(&gsp->fws.falcon.sb);
+}
static int
tu102_gsp_booter_unload(struct nvkm_gsp *gsp, u32 mbox0, u32 mbox1)
{
@ -370,6 +382,9 @@ tu102_gsp = {
.booter.ctor = tu102_gsp_booter_ctor,
+.fwsec_sb.ctor = tu102_gsp_fwsec_sb_ctor,
+.fwsec_sb.dtor = tu102_gsp_fwsec_sb_dtor,
.dtor = r535_gsp_dtor,
.oneinit = tu102_gsp_oneinit,
.init = tu102_gsp_init,


@ -30,6 +30,9 @@ tu116_gsp = {
.booter.ctor = tu102_gsp_booter_ctor,
+.fwsec_sb.ctor = tu102_gsp_fwsec_sb_ctor,
+.fwsec_sb.dtor = tu102_gsp_fwsec_sb_dtor,
.dtor = r535_gsp_dtor,
.oneinit = tu102_gsp_oneinit,
.init = tu102_gsp_init,


@ -295,7 +295,7 @@ static int pl111_amba_probe(struct amba_device *amba_dev,
variant->name, priv);
if (ret != 0) {
dev_err(dev, "%s failed irq %d\n", __func__, ret);
-return ret;
+goto dev_put;
}
ret = pl111_modeset_init(drm);


@ -450,7 +450,7 @@ typedef struct _ClockInfoArray{
//sizeof(ATOM_PPLIB_CLOCK_INFO)
UCHAR ucEntrySize;
-UCHAR clockInfo[] __counted_by(ucNumEntries);
+UCHAR clockInfo[] /*__counted_by(ucNumEntries)*/;
}ClockInfoArray;
typedef struct _NonClockInfoArray{


@ -26,9 +26,33 @@ static void tidss_atomic_commit_tail(struct drm_atomic_state *old_state)
tidss_runtime_get(tidss);
-drm_atomic_helper_commit_modeset_disables(ddev, old_state);
-drm_atomic_helper_commit_planes(ddev, old_state, DRM_PLANE_COMMIT_ACTIVE_ONLY);
-drm_atomic_helper_commit_modeset_enables(ddev, old_state);
+/*
+* TI's OLDI and DSI encoders need to be set up before the crtc is
+* enabled. Thus drm_atomic_helper_commit_modeset_enables() and
+* drm_atomic_helper_commit_modeset_disables() cannot be used here, as
+* they enable the crtc before bridges' pre-enable, and disable the crtc
+* after bridges' post-disable.
+*
+* Open code the functions here and first call the bridges' pre-enables,
+* then crtc enable, then bridges' post-enable (and vice versa for
+* disable).
+*/
+drm_atomic_helper_commit_encoder_bridge_disable(ddev, old_state);
+drm_atomic_helper_commit_crtc_disable(ddev, old_state);
+drm_atomic_helper_commit_encoder_bridge_post_disable(ddev, old_state);
+drm_atomic_helper_update_legacy_modeset_state(ddev, old_state);
+drm_atomic_helper_calc_timestamping_constants(old_state);
+drm_atomic_helper_commit_crtc_set_mode(ddev, old_state);
+drm_atomic_helper_commit_planes(ddev, old_state,
+DRM_PLANE_COMMIT_ACTIVE_ONLY);
+drm_atomic_helper_commit_encoder_bridge_pre_enable(ddev, old_state);
+drm_atomic_helper_commit_crtc_enable(ddev, old_state);
+drm_atomic_helper_commit_encoder_bridge_enable(ddev, old_state);
+drm_atomic_helper_commit_writebacks(ddev, old_state);
drm_atomic_helper_commit_hw_done(old_state);
drm_atomic_helper_wait_for_flip_done(ddev, old_state);


@ -3,7 +3,7 @@ config NOVA_CORE
depends on 64BIT
depends on PCI
depends on RUST
-depends on RUST_FW_LOADER_ABSTRACTIONS
+select RUST_FW_LOADER_ABSTRACTIONS
select AUXILIARY_BUS
default n
help


@ -588,21 +588,23 @@ impl Cmdq {
header.length(),
);
+let payload_length = header.payload_length();
// Check that the driver read area is large enough for the message.
-if slice_1.len() + slice_2.len() < header.length() {
+if slice_1.len() + slice_2.len() < payload_length {
return Err(EIO);
}
// Cut the message slices down to the actual length of the message.
-let (slice_1, slice_2) = if slice_1.len() > header.length() {
-// PANIC: we checked above that `slice_1` is at least as long as `msg_header.length()`.
-(slice_1.split_at(header.length()).0, &slice_2[0..0])
+let (slice_1, slice_2) = if slice_1.len() > payload_length {
+// PANIC: we checked above that `slice_1` is at least as long as `payload_length`.
+(slice_1.split_at(payload_length).0, &slice_2[0..0])
} else {
(
slice_1,
// PANIC: we checked above that `slice_1.len() + slice_2.len()` is at least as
-// large as `msg_header.length()`.
-slice_2.split_at(header.length() - slice_1.len()).0,
+// large as `payload_length`.
+slice_2.split_at(payload_length - slice_1.len()).0,
)
};


@ -141,8 +141,8 @@ unsafe impl AsBytes for GspFwWprMeta {}
// are valid.
unsafe impl FromBytes for GspFwWprMeta {}
-type GspFwWprMetaBootResumeInfo = r570_144::GspFwWprMeta__bindgen_ty_1;
-type GspFwWprMetaBootInfo = r570_144::GspFwWprMeta__bindgen_ty_1__bindgen_ty_1;
+type GspFwWprMetaBootResumeInfo = bindings::GspFwWprMeta__bindgen_ty_1;
+type GspFwWprMetaBootInfo = bindings::GspFwWprMeta__bindgen_ty_1__bindgen_ty_1;
impl GspFwWprMeta {
/// Fill in and return a `GspFwWprMeta` suitable for booting `gsp_firmware` using the
@ -150,8 +150,8 @@ impl GspFwWprMeta {
pub(crate) fn new(gsp_firmware: &GspFirmware, fb_layout: &FbLayout) -> Self {
Self(bindings::GspFwWprMeta {
// CAST: we want to store the bits of `GSP_FW_WPR_META_MAGIC` unmodified.
-magic: r570_144::GSP_FW_WPR_META_MAGIC as u64,
-revision: u64::from(r570_144::GSP_FW_WPR_META_REVISION),
+magic: bindings::GSP_FW_WPR_META_MAGIC as u64,
+revision: u64::from(bindings::GSP_FW_WPR_META_REVISION),
sysmemAddrOfRadix3Elf: gsp_firmware.radix3_dma_handle(),
sizeOfRadix3Elf: u64::from_safe_cast(gsp_firmware.size),
sysmemAddrOfBootloader: gsp_firmware.bootloader.ucode.dma_handle(),
@ -315,19 +315,19 @@ impl From<MsgFunction> for u32 {
#[repr(u32)]
pub(crate) enum SeqBufOpcode {
// Core operation opcodes
-CoreReset = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET,
-CoreResume = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME,
-CoreStart = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START,
-CoreWaitForHalt = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT,
+CoreReset = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET,
+CoreResume = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME,
+CoreStart = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START,
+CoreWaitForHalt = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT,
// Delay opcode
-DelayUs = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US,
+DelayUs = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US,
// Register operation opcodes
-RegModify = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY,
-RegPoll = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL,
-RegStore = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE,
-RegWrite = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE,
+RegModify = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY,
+RegPoll = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL,
+RegStore = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE,
+RegWrite = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE,
}
impl fmt::Display for SeqBufOpcode {
@ -351,25 +351,25 @@ impl TryFrom<u32> for SeqBufOpcode {
fn try_from(value: u32) -> Result<SeqBufOpcode> {
match value {
-r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET => {
+bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET => {
Ok(SeqBufOpcode::CoreReset)
}
-r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME => {
+bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME => {
Ok(SeqBufOpcode::CoreResume)
}
-r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START => {
+bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START => {
Ok(SeqBufOpcode::CoreStart)
}
-r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT => {
+bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT => {
Ok(SeqBufOpcode::CoreWaitForHalt)
}
-r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US => Ok(SeqBufOpcode::DelayUs),
-r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY => {
+bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US => Ok(SeqBufOpcode::DelayUs),
+bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY => {
Ok(SeqBufOpcode::RegModify)
}
-r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL => Ok(SeqBufOpcode::RegPoll),
-r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE => Ok(SeqBufOpcode::RegStore),
-r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE => Ok(SeqBufOpcode::RegWrite),
+bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL => Ok(SeqBufOpcode::RegPoll),
+bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE => Ok(SeqBufOpcode::RegStore),
+bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE => Ok(SeqBufOpcode::RegWrite),
_ => Err(EINVAL),
}
}
@ -385,7 +385,7 @@ impl From<SeqBufOpcode> for u32 {
/// Wrapper for GSP sequencer register write payload.
#[repr(transparent)]
#[derive(Copy, Clone)]
-pub(crate) struct RegWritePayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_WRITE);
+pub(crate) struct RegWritePayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_WRITE);
impl RegWritePayload {
/// Returns the register address.
@ -408,7 +408,7 @@ unsafe impl AsBytes for RegWritePayload {}
/// Wrapper for GSP sequencer register modify payload.
#[repr(transparent)]
#[derive(Copy, Clone)]
-pub(crate) struct RegModifyPayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_MODIFY);
+pub(crate) struct RegModifyPayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_MODIFY);
impl RegModifyPayload {
/// Returns the register address.
@ -436,7 +436,7 @@ unsafe impl AsBytes for RegModifyPayload {}
/// Wrapper for GSP sequencer register poll payload.
#[repr(transparent)]
#[derive(Copy, Clone)]
-pub(crate) struct RegPollPayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_POLL);
+pub(crate) struct RegPollPayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_POLL);
impl RegPollPayload {
/// Returns the register address.
@ -469,7 +469,7 @@ unsafe impl AsBytes for RegPollPayload {}
/// Wrapper for GSP sequencer delay payload.
#[repr(transparent)]
#[derive(Copy, Clone)]
-pub(crate) struct DelayUsPayload(r570_144::GSP_SEQ_BUF_PAYLOAD_DELAY_US);
+pub(crate) struct DelayUsPayload(bindings::GSP_SEQ_BUF_PAYLOAD_DELAY_US);
impl DelayUsPayload {
/// Returns the delay value in microseconds.
@ -487,7 +487,7 @@ unsafe impl AsBytes for DelayUsPayload {}
/// Wrapper for GSP sequencer register store payload.
#[repr(transparent)]
#[derive(Copy, Clone)]
-pub(crate) struct RegStorePayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_STORE);
+pub(crate) struct RegStorePayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_STORE);
impl RegStorePayload {
/// Returns the register address.
@ -510,7 +510,7 @@ unsafe impl AsBytes for RegStorePayload {}
/// Wrapper for GSP sequencer buffer command.
#[repr(transparent)]
-pub(crate) struct SequencerBufferCmd(r570_144::GSP_SEQUENCER_BUFFER_CMD);
+pub(crate) struct SequencerBufferCmd(bindings::GSP_SEQUENCER_BUFFER_CMD);
impl SequencerBufferCmd {
/// Returns the opcode as a `SeqBufOpcode` enum, or error if invalid.
@ -612,7 +612,7 @@ unsafe impl AsBytes for SequencerBufferCmd {}
/// Wrapper for GSP run CPU sequencer RPC.
#[repr(transparent)]
-pub(crate) struct RunCpuSequencer(r570_144::rpc_run_cpu_sequencer_v17_00);
+pub(crate) struct RunCpuSequencer(bindings::rpc_run_cpu_sequencer_v17_00);
impl RunCpuSequencer {
/// Returns the command index.
@ -797,13 +797,6 @@ impl bindings::rpc_message_header_v {
}
}
-// SAFETY: We can't derive the Zeroable trait for this binding because the
-// procedural macro doesn't support the syntax used by bindgen to create the
-// __IncompleteArrayField types. So instead we implement it here, which is safe
-// because these are explicitly padded structures only containing types for
-// which any bit pattern, including all zeros, is valid.
-unsafe impl Zeroable for bindings::rpc_message_header_v {}
/// GSP Message Element.
///
/// This is essentially a message header expected to be followed by the message data.
@ -853,11 +846,16 @@ impl GspMsgElement {
self.inner.checkSum = checksum;
}
-/// Returns the total length of the message.
+/// Returns the length of the message's payload.
+pub(crate) fn payload_length(&self) -> usize {
+// `rpc.length` includes the length of the RPC message header.
+num::u32_as_usize(self.inner.rpc.length)
+.saturating_sub(size_of::<bindings::rpc_message_header_v>())
+}
+/// Returns the total length of the message, message and RPC headers included.
pub(crate) fn length(&self) -> usize {
-// `rpc.length` includes the length of the GspRpcHeader but not the message header.
-size_of::<Self>() - size_of::<bindings::rpc_message_header_v>()
-+ num::u32_as_usize(self.inner.rpc.length)
+size_of::<Self>() + self.payload_length()
}
// Returns the sequence number of the message.


@ -24,8 +24,11 @@
unreachable_pub,
unsafe_op_in_unsafe_fn
)]
-use kernel::{
-ffi,
-prelude::Zeroable, //
-};
+use kernel::ffi;
+use pin_init::MaybeZeroable;
include!("r570_144/bindings.rs");
// SAFETY: This type has a size of zero, so its inclusion into another type should not affect their
// ability to implement `Zeroable`.
unsafe impl<T> kernel::prelude::Zeroable for __IncompleteArrayField<T> {}


@ -320,11 +320,12 @@ pub const NV_VGPU_MSG_EVENT_RECOVERY_ACTION: _bindgen_ty_3 = 4130;
pub const NV_VGPU_MSG_EVENT_NUM_EVENTS: _bindgen_ty_3 = 4131;
pub type _bindgen_ty_3 = ffi::c_uint;
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct NV0080_CTRL_GPU_GET_SRIOV_CAPS_PARAMS {
pub totalVFs: u32_,
pub firstVfOffset: u32_,
pub vfFeatureMask: u32_,
+pub __bindgen_padding_0: [u8; 4usize],
pub FirstVFBar0Address: u64_,
pub FirstVFBar1Address: u64_,
pub FirstVFBar2Address: u64_,
@ -340,23 +341,26 @@ pub struct NV0080_CTRL_GPU_GET_SRIOV_CAPS_PARAMS {
pub bClientRmAllocatedCtxBuffer: u8_,
pub bNonPowerOf2ChannelCountSupported: u8_,
pub bVfResizableBAR1Supported: u8_,
+pub __bindgen_padding_1: [u8; 7usize],
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct NV2080_CTRL_BIOS_GET_SKU_INFO_PARAMS {
pub BoardID: u32_,
pub chipSKU: [ffi::c_char; 9usize],
pub chipSKUMod: [ffi::c_char; 5usize],
+pub __bindgen_padding_0: [u8; 2usize],
pub skuConfigVersion: u32_,
pub project: [ffi::c_char; 5usize],
pub projectSKU: [ffi::c_char; 5usize],
pub CDP: [ffi::c_char; 6usize],
pub projectSKUMod: [ffi::c_char; 2usize],
+pub __bindgen_padding_1: [u8; 2usize],
pub businessCycle: u32_,
}
pub type NV2080_CTRL_CMD_FB_GET_FB_REGION_SURFACE_MEM_TYPE_FLAG = [u8_; 17usize];
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct NV2080_CTRL_CMD_FB_GET_FB_REGION_FB_REGION_INFO {
pub base: u64_,
pub limit: u64_,
@ -368,13 +372,14 @@ pub struct NV2080_CTRL_CMD_FB_GET_FB_REGION_FB_REGION_INFO {
pub blackList: NV2080_CTRL_CMD_FB_GET_FB_REGION_SURFACE_MEM_TYPE_FLAG,
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct NV2080_CTRL_CMD_FB_GET_FB_REGION_INFO_PARAMS {
pub numFBRegions: u32_,
+pub __bindgen_padding_0: [u8; 4usize],
pub fbRegion: [NV2080_CTRL_CMD_FB_GET_FB_REGION_FB_REGION_INFO; 16usize],
}
#[repr(C)]
-#[derive(Debug, Copy, Clone)]
+#[derive(Debug, Copy, Clone, MaybeZeroable)]
pub struct NV2080_CTRL_GPU_GET_GID_INFO_PARAMS {
pub index: u32_,
pub flags: u32_,
@ -391,14 +396,14 @@ impl Default for NV2080_CTRL_GPU_GET_GID_INFO_PARAMS {
}
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct DOD_METHOD_DATA {
pub status: u32_,
pub acpiIdListLen: u32_,
pub acpiIdList: [u32_; 16usize],
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct JT_METHOD_DATA {
pub status: u32_,
pub jtCaps: u32_,
@ -407,14 +412,14 @@ pub struct JT_METHOD_DATA {
pub __bindgen_padding_0: u8,
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct MUX_METHOD_DATA_ELEMENT {
pub acpiId: u32_,
pub mode: u32_,
pub status: u32_,
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct MUX_METHOD_DATA {
pub tableLen: u32_,
pub acpiIdMuxModeTable: [MUX_METHOD_DATA_ELEMENT; 16usize],
@ -422,13 +427,13 @@ pub struct MUX_METHOD_DATA {
pub acpiIdMuxStateTable: [MUX_METHOD_DATA_ELEMENT; 16usize],
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct CAPS_METHOD_DATA {
pub status: u32_,
pub optimusCaps: u32_,
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct ACPI_METHOD_DATA {
pub bValid: u8_,
pub __bindgen_padding_0: [u8; 3usize],
@ -438,20 +443,20 @@ pub struct ACPI_METHOD_DATA {
pub capsMethodData: CAPS_METHOD_DATA,
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct VIRTUAL_DISPLAY_GET_MAX_RESOLUTION_PARAMS {
pub headIndex: u32_,
pub maxHResolution: u32_,
pub maxVResolution: u32_,
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct VIRTUAL_DISPLAY_GET_NUM_HEADS_PARAMS {
pub numHeads: u32_,
pub maxNumHeads: u32_,
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct BUSINFO {
pub deviceID: u16_,
pub vendorID: u16_,
@ -461,7 +466,7 @@ pub struct BUSINFO {
pub __bindgen_padding_0: u8,
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct GSP_VF_INFO {
pub totalVFs: u32_,
pub firstVFOffset: u32_,
@ -474,34 +479,37 @@ pub struct GSP_VF_INFO {
pub __bindgen_padding_0: [u8; 5usize],
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct GSP_PCIE_CONFIG_REG {
pub linkCap: u32_,
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct EcidManufacturingInfo {
pub ecidLow: u32_,
pub ecidHigh: u32_,
pub ecidExtended: u32_,
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct FW_WPR_LAYOUT_OFFSET {
pub nonWprHeapOffset: u64_,
pub frtsOffset: u64_,
}
#[repr(C)]
-#[derive(Debug, Copy, Clone)]
+#[derive(Debug, Copy, Clone, MaybeZeroable)]
pub struct GspStaticConfigInfo_t {
pub grCapsBits: [u8_; 23usize],
+pub __bindgen_padding_0: u8,
pub gidInfo: NV2080_CTRL_GPU_GET_GID_INFO_PARAMS,
pub SKUInfo: NV2080_CTRL_BIOS_GET_SKU_INFO_PARAMS,
+pub __bindgen_padding_1: [u8; 4usize],
pub fbRegionInfoParams: NV2080_CTRL_CMD_FB_GET_FB_REGION_INFO_PARAMS,
pub sriovCaps: NV0080_CTRL_GPU_GET_SRIOV_CAPS_PARAMS,
pub sriovMaxGfid: u32_,
pub engineCaps: [u32_; 3usize],
pub poisonFuseEnabled: u8_,
+pub __bindgen_padding_2: [u8; 7usize],
pub fb_length: u64_,
pub fbio_mask: u64_,
pub fb_bus_width: u32_,
@ -527,16 +535,20 @@ pub struct GspStaticConfigInfo_t {
pub bIsMigSupported: u8_,
pub RTD3GC6TotalBoardPower: u16_,
pub RTD3GC6PerstDelay: u16_,
+pub __bindgen_padding_3: [u8; 2usize],
pub bar1PdeBase: u64_,
pub bar2PdeBase: u64_,
pub bVbiosValid: u8_,
+pub __bindgen_padding_4: [u8; 3usize],
pub vbiosSubVendor: u32_,
pub vbiosSubDevice: u32_,
pub bPageRetirementSupported: u8_,
pub bSplitVasBetweenServerClientRm: u8_,
pub bClRootportNeedsNosnoopWAR: u8_,
+pub __bindgen_padding_5: u8,
pub displaylessMaxHeads: VIRTUAL_DISPLAY_GET_NUM_HEADS_PARAMS,
pub displaylessMaxResolution: VIRTUAL_DISPLAY_GET_MAX_RESOLUTION_PARAMS,
+pub __bindgen_padding_6: [u8; 4usize],
pub displaylessMaxPixels: u64_,
pub hInternalClient: u32_,
pub hInternalDevice: u32_,
@ -558,7 +570,7 @@ impl Default for GspStaticConfigInfo_t {
}
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct GspSystemInfo {
pub gpuPhysAddr: u64_,
pub gpuPhysFbAddr: u64_,
@ -615,7 +627,7 @@ pub struct GspSystemInfo {
pub hostPageSize: u64_,
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct MESSAGE_QUEUE_INIT_ARGUMENTS {
pub sharedMemPhysAddr: u64_,
pub pageTableEntryCount: u32_,
@ -624,7 +636,7 @@ pub struct MESSAGE_QUEUE_INIT_ARGUMENTS {
pub statQueueOffset: u64_,
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct GSP_SR_INIT_ARGUMENTS {
pub oldLevel: u32_,
pub flags: u32_,
@ -632,7 +644,7 @@ pub struct GSP_SR_INIT_ARGUMENTS {
pub __bindgen_padding_0: [u8; 3usize],
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct GSP_ARGUMENTS_CACHED {
pub messageQueueInitArguments: MESSAGE_QUEUE_INIT_ARGUMENTS,
pub srInitArguments: GSP_SR_INIT_ARGUMENTS,
@ -642,13 +654,13 @@ pub struct GSP_ARGUMENTS_CACHED {
pub profilerArgs: GSP_ARGUMENTS_CACHED__bindgen_ty_1,
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct GSP_ARGUMENTS_CACHED__bindgen_ty_1 {
pub pa: u64_,
pub size: u64_,
}
#[repr(C)]
-#[derive(Copy, Clone, Zeroable)]
+#[derive(Copy, Clone, MaybeZeroable)]
pub union rpc_message_rpc_union_field_v03_00 {
pub spare: u32_,
pub cpuRmGfid: u32_,
@ -664,6 +676,7 @@ impl Default for rpc_message_rpc_union_field_v03_00 {
}
pub type rpc_message_rpc_union_field_v = rpc_message_rpc_union_field_v03_00;
#[repr(C)]
+#[derive(MaybeZeroable)]
pub struct rpc_message_header_v03_00 {
pub header_version: u32_,
pub signature: u32_,
@ -686,7 +699,7 @@ impl Default for rpc_message_header_v03_00 {
}
pub type rpc_message_header_v = rpc_message_header_v03_00;
#[repr(C)]
-#[derive(Copy, Clone, Zeroable)]
+#[derive(Copy, Clone, MaybeZeroable)]
pub struct GspFwWprMeta {
pub magic: u64_,
pub revision: u64_,
@ -721,19 +734,19 @@ pub struct GspFwWprMeta {
pub verified: u64_,
}
#[repr(C)]
-#[derive(Copy, Clone, Zeroable)]
+#[derive(Copy, Clone, MaybeZeroable)]
pub union GspFwWprMeta__bindgen_ty_1 {
pub __bindgen_anon_1: GspFwWprMeta__bindgen_ty_1__bindgen_ty_1,
pub __bindgen_anon_2: GspFwWprMeta__bindgen_ty_1__bindgen_ty_2,
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct GspFwWprMeta__bindgen_ty_1__bindgen_ty_1 {
pub sysmemAddrOfSignature: u64_,
pub sizeOfSignature: u64_,
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct GspFwWprMeta__bindgen_ty_1__bindgen_ty_2 {
pub gspFwHeapFreeListWprOffset: u32_,
pub unused0: u32_,
@ -749,13 +762,13 @@ impl Default for GspFwWprMeta__bindgen_ty_1 {
}
}
#[repr(C)]
-#[derive(Copy, Clone, Zeroable)]
+#[derive(Copy, Clone, MaybeZeroable)]
pub union GspFwWprMeta__bindgen_ty_2 {
pub __bindgen_anon_1: GspFwWprMeta__bindgen_ty_2__bindgen_ty_1,
pub __bindgen_anon_2: GspFwWprMeta__bindgen_ty_2__bindgen_ty_2,
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct GspFwWprMeta__bindgen_ty_2__bindgen_ty_1 {
pub partitionRpcAddr: u64_,
pub partitionRpcRequestOffset: u16_,
@ -767,7 +780,7 @@ pub struct GspFwWprMeta__bindgen_ty_2__bindgen_ty_1 {
pub lsUcodeVersion: u32_,
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct GspFwWprMeta__bindgen_ty_2__bindgen_ty_2 {
pub partitionRpcPadding: [u32_; 4usize],
pub sysmemAddrOfCrashReportQueue: u64_,
@ -802,7 +815,7 @@ pub const LibosMemoryRegionLoc_LIBOS_MEMORY_REGION_LOC_SYSMEM: LibosMemoryRegion
pub const LibosMemoryRegionLoc_LIBOS_MEMORY_REGION_LOC_FB: LibosMemoryRegionLoc = 2;
pub type LibosMemoryRegionLoc = ffi::c_uint;
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct LibosMemoryRegionInitArgument {
pub id8: LibosAddress,
pub pa: LibosAddress,
@ -812,7 +825,7 @@ pub struct LibosMemoryRegionInitArgument {
pub __bindgen_padding_0: [u8; 6usize],
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct PACKED_REGISTRY_ENTRY {
pub nameOffset: u32_,
pub type_: u8_,
@ -821,14 +834,14 @@ pub struct PACKED_REGISTRY_ENTRY {
pub length: u32_,
}
#[repr(C)]
-#[derive(Debug, Default)]
+#[derive(Debug, Default, MaybeZeroable)]
pub struct PACKED_REGISTRY_TABLE {
pub size: u32_,
pub numEntries: u32_,
pub entries: __IncompleteArrayField<PACKED_REGISTRY_ENTRY>,
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct msgqTxHeader {
pub version: u32_,
pub size: u32_,
@ -840,13 +853,13 @@ pub struct msgqTxHeader {
pub entryOff: u32_,
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct msgqRxHeader {
pub readPtr: u32_,
}
#[repr(C)]
#[repr(align(8))]
-#[derive(Zeroable)]
+#[derive(MaybeZeroable)]
pub struct GSP_MSG_QUEUE_ELEMENT {
pub authTagBuffer: [u8_; 16usize],
pub aadBuffer: [u8_; 16usize],
@ -866,7 +879,7 @@ impl Default for GSP_MSG_QUEUE_ELEMENT {
}
}
#[repr(C)]
-#[derive(Debug, Default)]
+#[derive(Debug, Default, MaybeZeroable)]
pub struct rpc_run_cpu_sequencer_v17_00 {
pub bufferSizeDWord: u32_,
pub cmdIndex: u32_,
@ -884,20 +897,20 @@ pub const GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT: GSP_SEQ_BUF_
pub const GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME: GSP_SEQ_BUF_OPCODE = 8;
pub type GSP_SEQ_BUF_OPCODE = ffi::c_uint;
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct GSP_SEQ_BUF_PAYLOAD_REG_WRITE {
pub addr: u32_,
pub val: u32_,
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct GSP_SEQ_BUF_PAYLOAD_REG_MODIFY {
pub addr: u32_,
pub mask: u32_,
pub val: u32_,
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct GSP_SEQ_BUF_PAYLOAD_REG_POLL {
pub addr: u32_,
pub mask: u32_,
@ -906,24 +919,24 @@ pub struct GSP_SEQ_BUF_PAYLOAD_REG_POLL {
pub error: u32_,
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct GSP_SEQ_BUF_PAYLOAD_DELAY_US {
pub val: u32_,
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone)]
+#[derive(Debug, Default, Copy, Clone, MaybeZeroable)]
pub struct GSP_SEQ_BUF_PAYLOAD_REG_STORE {
pub addr: u32_,
pub index: u32_,
}
#[repr(C)]
-#[derive(Copy, Clone)]
+#[derive(Copy, Clone, MaybeZeroable)]
pub struct GSP_SEQUENCER_BUFFER_CMD {
pub opCode: GSP_SEQ_BUF_OPCODE,
pub payload: GSP_SEQUENCER_BUFFER_CMD__bindgen_ty_1,
}
#[repr(C)]
-#[derive(Copy, Clone)]
+#[derive(Copy, Clone, MaybeZeroable)]
pub union GSP_SEQUENCER_BUFFER_CMD__bindgen_ty_1 {
pub regWrite: GSP_SEQ_BUF_PAYLOAD_REG_WRITE,
pub regModify: GSP_SEQ_BUF_PAYLOAD_REG_MODIFY,


@ -652,13 +652,6 @@ static bool vga_is_boot_device(struct vga_device *vgadev)
return true;
}
-/*
-* Vgadev has neither IO nor MEM enabled. If we haven't found any
-* other VGA devices, it is the best candidate so far.
-*/
-if (!boot_vga)
-return true;
return false;
}


@ -60,6 +60,12 @@ int drm_atomic_helper_check_plane_state(struct drm_plane_state *plane_state,
int drm_atomic_helper_check_planes(struct drm_device *dev,
struct drm_atomic_state *state);
int drm_atomic_helper_check_crtc_primary_plane(struct drm_crtc_state *crtc_state);
+void drm_atomic_helper_commit_encoder_bridge_disable(struct drm_device *dev,
+struct drm_atomic_state *state);
+void drm_atomic_helper_commit_crtc_disable(struct drm_device *dev,
+struct drm_atomic_state *state);
+void drm_atomic_helper_commit_encoder_bridge_post_disable(struct drm_device *dev,
+struct drm_atomic_state *state);
int drm_atomic_helper_check(struct drm_device *dev,
struct drm_atomic_state *state);
void drm_atomic_helper_commit_tail(struct drm_atomic_state *state);
@ -89,8 +95,24 @@ drm_atomic_helper_update_legacy_modeset_state(struct drm_device *dev,
void
drm_atomic_helper_calc_timestamping_constants(struct drm_atomic_state *state);
+void drm_atomic_helper_commit_crtc_set_mode(struct drm_device *dev,
+struct drm_atomic_state *state);
void drm_atomic_helper_commit_modeset_disables(struct drm_device *dev,
struct drm_atomic_state *state);
+void drm_atomic_helper_commit_writebacks(struct drm_device *dev,
+struct drm_atomic_state *state);
+void drm_atomic_helper_commit_encoder_bridge_pre_enable(struct drm_device *dev,
+struct drm_atomic_state *state);
+void drm_atomic_helper_commit_crtc_enable(struct drm_device *dev,
+struct drm_atomic_state *state);
+void drm_atomic_helper_commit_encoder_bridge_enable(struct drm_device *dev,
+struct drm_atomic_state *state);
void drm_atomic_helper_commit_modeset_enables(struct drm_device *dev,
struct drm_atomic_state *old_state);


@ -176,33 +176,17 @@ struct drm_bridge_funcs {
/**
* @disable:
*
-* The @disable callback should disable the bridge.
+* This callback should disable the bridge. It is called right before
+* the preceding element in the display pipe is disabled. If the
+* preceding element is a bridge this means it's called before that
+* bridge's @disable vfunc. If the preceding element is a &drm_encoder
+* it's called right before the &drm_encoder_helper_funcs.disable,
+* &drm_encoder_helper_funcs.prepare or &drm_encoder_helper_funcs.dpms
+* hook.
*
* The bridge can assume that the display pipe (i.e. clocks and timing
* signals) feeding it is still running when this callback is called.
-*
-*
-* If the preceding element is a &drm_bridge, then this is called before
-* that bridge is disabled via one of:
-*
-* - &drm_bridge_funcs.disable
-* - &drm_bridge_funcs.atomic_disable
-*
-* If the preceding element of the bridge is a display controller, then
-* this callback is called before the encoder is disabled via one of:
-*
-* - &drm_encoder_helper_funcs.atomic_disable
-* - &drm_encoder_helper_funcs.prepare
-* - &drm_encoder_helper_funcs.disable
-* - &drm_encoder_helper_funcs.dpms
-*
-* and the CRTC is disabled via one of:
-*
-* - &drm_crtc_helper_funcs.prepare
-* - &drm_crtc_helper_funcs.atomic_disable
-* - &drm_crtc_helper_funcs.disable
-* - &drm_crtc_helper_funcs.dpms.
*
* The @disable callback is optional.
*
* NOTE:
@ -215,34 +199,17 @@ struct drm_bridge_funcs {
/**
* @post_disable:
*
* This callback should disable the bridge. It is called right after the
* preceding element in the display pipe is disabled. If the preceding
* element is a bridge this means it's called after that bridge's
* @post_disable function. If the preceding element is a &drm_encoder
* it's called right after the encoder's
* &drm_encoder_helper_funcs.disable, &drm_encoder_helper_funcs.prepare
* or &drm_encoder_helper_funcs.dpms hook.
*
* The bridge must assume that the display pipe (i.e. clocks and timing
-* signals) feeding this bridge is no longer running when the
-* @post_disable is called.
-*
-* This callback should perform all the actions required by the hardware
-* after it has stopped receiving signals from the preceding element.
-*
-* If the preceding element is a &drm_bridge, then this is called after
-* that bridge is post-disabled (unless marked otherwise by the
-* @pre_enable_prev_first flag) via one of:
-*
-* - &drm_bridge_funcs.post_disable
-* - &drm_bridge_funcs.atomic_post_disable
-*
-* If the preceding element of the bridge is a display controller, then
-* this callback is called after the encoder is disabled via one of:
-*
-* - &drm_encoder_helper_funcs.atomic_disable
-* - &drm_encoder_helper_funcs.prepare
-* - &drm_encoder_helper_funcs.disable
-* - &drm_encoder_helper_funcs.dpms
-*
-* and the CRTC is disabled via one of:
-*
-* - &drm_crtc_helper_funcs.prepare
-* - &drm_crtc_helper_funcs.atomic_disable
-* - &drm_crtc_helper_funcs.disable
-* - &drm_crtc_helper_funcs.dpms
+* signals) feeding it is no longer running when this callback is
+* called.
*
* The @post_disable callback is optional.
*
@ -285,30 +252,18 @@ struct drm_bridge_funcs {
/**
* @pre_enable:
*
+* This callback should enable the bridge. It is called right before
+* the preceding element in the display pipe is enabled. If the
+* preceding element is a bridge this means it's called before that
+* bridge's @pre_enable function. If the preceding element is a
+* &drm_encoder it's called right before the encoder's
+* &drm_encoder_helper_funcs.enable, &drm_encoder_helper_funcs.commit or
+* &drm_encoder_helper_funcs.dpms hook.
+*
* The display pipe (i.e. clocks and timing signals) feeding this bridge
-* will not yet be running when the @pre_enable is called.
-*
-* This callback should perform all the necessary actions to prepare the
-* bridge to accept signals from the preceding element.
-*
-* If the preceding element is a &drm_bridge, then this is called before
-* that bridge is pre-enabled (unless marked otherwise by
-* @pre_enable_prev_first flag) via one of:
-*
-* - &drm_bridge_funcs.pre_enable
-* - &drm_bridge_funcs.atomic_pre_enable
-*
-* If the preceding element of the bridge is a display controller, then
-* this callback is called before the CRTC is enabled via one of:
-*
-* - &drm_crtc_helper_funcs.atomic_enable
-* - &drm_crtc_helper_funcs.commit
-*
-* and the encoder is enabled via one of:
-*
-* - &drm_encoder_helper_funcs.atomic_enable
-* - &drm_encoder_helper_funcs.enable
-* - &drm_encoder_helper_funcs.commit
+* will not yet be running when this callback is called. The bridge must
+* not enable the display link feeding the next bridge in the chain (if
+* there is one) when this callback is called.
*
* The @pre_enable callback is optional.
*
@ -322,31 +277,19 @@ struct drm_bridge_funcs {
/**
* @enable:
*
-* The @enable callback should enable the bridge.
+* This callback should enable the bridge. It is called right after
+* the preceding element in the display pipe is enabled. If the
+* preceding element is a bridge this means it's called after that
+* bridge's @enable function. If the preceding element is a
+* &drm_encoder it's called right after the encoder's
+* &drm_encoder_helper_funcs.enable, &drm_encoder_helper_funcs.commit or
+* &drm_encoder_helper_funcs.dpms hook.
*
* The bridge can assume that the display pipe (i.e. clocks and timing
* signals) feeding it is running when this callback is called. This
* callback must enable the display link feeding the next bridge in the
* chain if there is one.
*
-* If the preceding element is a &drm_bridge, then this is called after
-* that bridge is enabled via one of:
-*
-* - &drm_bridge_funcs.enable
-* - &drm_bridge_funcs.atomic_enable
-*
-* If the preceding element of the bridge is a display controller, then
-* this callback is called after the CRTC is enabled via one of:
-*
-* - &drm_crtc_helper_funcs.atomic_enable
-* - &drm_crtc_helper_funcs.commit
-*
-* and the encoder is enabled via one of:
-*
-* - &drm_encoder_helper_funcs.atomic_enable
-* - &drm_encoder_helper_funcs.enable
-* - drm_encoder_helper_funcs.commit
-*
* The @enable callback is optional.
*
* NOTE:
@ -359,30 +302,17 @@ struct drm_bridge_funcs {
/**
* @atomic_pre_enable:
*
+* This callback should enable the bridge. It is called right before
+* the preceding element in the display pipe is enabled. If the
+* preceding element is a bridge this means it's called before that
+* bridge's @atomic_pre_enable or @pre_enable function. If the preceding
+* element is a &drm_encoder it's called right before the encoder's
+* &drm_encoder_helper_funcs.atomic_enable hook.
+*
* The display pipe (i.e. clocks and timing signals) feeding this bridge
* will not yet be running when the @atomic_pre_enable is called.
*
* This callback should perform all the necessary actions to prepare the
* bridge to accept signals from the preceding element.
*
* If the preceding element is a &drm_bridge, then this is called before
* that bridge is pre-enabled (unless marked otherwise by
* @pre_enable_prev_first flag) via one of:
*
* - &drm_bridge_funcs.pre_enable
* - &drm_bridge_funcs.atomic_pre_enable
*
* If the preceding element of the bridge is a display controller, then
* this callback is called before the CRTC is enabled via one of:
*
* - &drm_crtc_helper_funcs.atomic_enable
* - &drm_crtc_helper_funcs.commit
*
* and the encoder is enabled via one of:
*
* - &drm_encoder_helper_funcs.atomic_enable
* - &drm_encoder_helper_funcs.enable
* - &drm_encoder_helper_funcs.commit
*
* The bridge must not enable the display link feeding the next bridge in
* the chain (if there is one) when this callback is called.
*
* The @atomic_pre_enable callback is optional.
*/
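/*
 * Editor's illustrative sketch, not part of this patch: the atomic
 * pre-enable/enable pair, continuing the hypothetical foo_bridge from
 * the sketch above. The hook signatures follow recent kernels, where
 * the atomic bridge hooks take the global &struct drm_atomic_state.
 * foo_bridge_hw_start_link() is an invented stand-in for the real
 * register writes that start the output link.
 */
static void foo_bridge_hw_start_link(struct foo_bridge *foo)
{
	/* Hypothetical: program the hardware to start driving the link. */
}

static void foo_bridge_atomic_pre_enable(struct drm_bridge *bridge,
					 struct drm_atomic_state *state)
{
	struct foo_bridge *foo = container_of(bridge, struct foo_bridge, bridge);

	/* Pipe still stopped: power up, but keep the output link off. */
	if (regulator_enable(foo->vdd))
		return;
	clk_prepare_enable(foo->refclk);
}

static void foo_bridge_atomic_enable(struct drm_bridge *bridge,
				     struct drm_atomic_state *state)
{
	struct foo_bridge *foo = container_of(bridge, struct foo_bridge, bridge);

	/* Pipe now running: start feeding the next element in the chain. */
	foo_bridge_hw_start_link(foo);
}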
@@ -392,31 +322,18 @@ struct drm_bridge_funcs {
/**
* @atomic_enable:
*
* This callback should enable the bridge. It is called right after
* the preceding element in the display pipe is enabled. If the
* preceding element is a bridge this means it's called after that
* bridge's @atomic_enable or @enable function. If the preceding element
* is a &drm_encoder it's called right after the encoder's
* &drm_encoder_helper_funcs.atomic_enable hook.
*
* The bridge can assume that the display pipe (i.e. clocks and timing
* signals) feeding it is running when this callback is called. This
* callback must enable the display link feeding the next bridge in the
* chain if there is one.
*
* If the preceding element is a &drm_bridge, then this is called after
* that bridge is enabled via one of:
*
* - &drm_bridge_funcs.enable
* - &drm_bridge_funcs.atomic_enable
*
* If the preceding element of the bridge is a display controller, then
* this callback is called after the CRTC is enabled via one of:
*
* - &drm_crtc_helper_funcs.atomic_enable
* - &drm_crtc_helper_funcs.commit
*
* and the encoder is enabled via one of:
*
* - &drm_encoder_helper_funcs.atomic_enable
* - &drm_encoder_helper_funcs.enable
* - &drm_encoder_helper_funcs.commit
*
* The @atomic_enable callback is optional.
*/
void (*atomic_enable)(struct drm_bridge *bridge,
@@ -424,32 +341,16 @@ struct drm_bridge_funcs {
/**
* @atomic_disable:
*
* This callback should disable the bridge. It is called right before
* the preceding element in the display pipe is disabled. If the
* preceding element is a bridge this means it's called before that
* bridge's @atomic_disable or @disable vfunc. If the preceding element
* is a &drm_encoder it's called right before the
* &drm_encoder_helper_funcs.atomic_disable hook.
*
* The bridge can assume that the display pipe (i.e. clocks and timing
* signals) feeding it is still running when this callback is called.
*
* If the preceding element is a &drm_bridge, then this is called before
* that bridge is disabled via one of:
*
* - &drm_bridge_funcs.disable
* - &drm_bridge_funcs.atomic_disable
*
* If the preceding element of the bridge is a display controller, then
* this callback is called before the encoder is disabled via one of:
*
* - &drm_encoder_helper_funcs.atomic_disable
* - &drm_encoder_helper_funcs.prepare
* - &drm_encoder_helper_funcs.disable
* - &drm_encoder_helper_funcs.dpms
*
* and the CRTC is disabled via one of:
*
* - &drm_crtc_helper_funcs.prepare
* - &drm_crtc_helper_funcs.atomic_disable
* - &drm_crtc_helper_funcs.disable
* - &drm_crtc_helper_funcs.dpms
*
* The @atomic_disable callback is optional.
*/
void (*atomic_disable)(struct drm_bridge *bridge,
@@ -458,34 +359,16 @@ struct drm_bridge_funcs {
/**
* @atomic_post_disable:
*
* This callback should disable the bridge. It is called right after the
* preceding element in the display pipe is disabled. If the preceding
* element is a bridge this means it's called after that bridge's
* @atomic_post_disable or @post_disable function. If the preceding
* element is a &drm_encoder it's called right after the encoder's
* &drm_encoder_helper_funcs.atomic_disable hook.
*
* The bridge must assume that the display pipe (i.e. clocks and timing
* signals) feeding this bridge is no longer running when the
* @atomic_post_disable is called.
*
* This callback should perform all the actions required by the hardware
* after it has stopped receiving signals from the preceding element.
*
* If the preceding element is a &drm_bridge, then this is called after
* that bridge is post-disabled (unless marked otherwise by the
* @pre_enable_prev_first flag) via one of:
*
* - &drm_bridge_funcs.post_disable
* - &drm_bridge_funcs.atomic_post_disable
*
* If the preceding element of the bridge is a display controller, then
* this callback is called after the encoder is disabled via one of:
*
* - &drm_encoder_helper_funcs.atomic_disable
* - &drm_encoder_helper_funcs.prepare
* - &drm_encoder_helper_funcs.disable
* - &drm_encoder_helper_funcs.dpms
*
* and the CRTC is disabled via one of:
*
* - &drm_crtc_helper_funcs.prepare
* - &drm_crtc_helper_funcs.atomic_disable
* - &drm_crtc_helper_funcs.disable
* - &drm_crtc_helper_funcs.dpms
*
* The @atomic_post_disable callback is optional.
*/
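/*
 * Editor's illustrative sketch, not part of this patch: the matching
 * teardown pair for the hypothetical foo_bridge. @atomic_disable runs
 * while the pipe is still live and must stop the link to the next
 * element; @atomic_post_disable runs after the pipe has stopped and
 * may remove clocks and power. foo_bridge_hw_stop_link() is an
 * invented stand-in for the real register writes.
 */
static void foo_bridge_hw_stop_link(struct foo_bridge *foo)
{
	/* Hypothetical: program the hardware to stop driving the link. */
}

static void foo_bridge_atomic_disable(struct drm_bridge *bridge,
				      struct drm_atomic_state *state)
{
	struct foo_bridge *foo = container_of(bridge, struct foo_bridge, bridge);

	/* Pipe still running: stop feeding the next bridge/connector. */
	foo_bridge_hw_stop_link(foo);
}

static void foo_bridge_atomic_post_disable(struct drm_bridge *bridge,
					   struct drm_atomic_state *state)
{
	struct foo_bridge *foo = container_of(bridge, struct foo_bridge, bridge);

	/* Pipe has stopped: safe to gate the clock and cut power. */
	clk_disable_unprepare(foo->refclk);
	regulator_disable(foo->vdd);
}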