EETOP 创芯网论坛 (原名:电子顶级开发网)
[资料] Prime Time User Guide 中英对照 第三章(中)

发表于 2024-1-5 10:13:34
其实纯英书籍最大的阅读障碍有三:1,词汇量 2,专业名词 3,长难句的语序问题导致的理解困难。1很好理解与解决,背!2一般都会有附录来解释或者从业过程中慢慢就了解了。3才是本人工作的重中之重。因为在这段时间的通读翻译过程中,很多时候是每个单词都认识,但是合成一句话之后就变得很难理解。当然还有一词多义导致的语句不通顺、不甚理解的情况(主要还是词汇量的问题)。这些问题都由我来慢慢解决。
话不多说,直接上炕!
Distributed Multi-Scenario Analysis
Verifying a chip design requires several PrimeTime runs to check correct operation under different operating conditions and different operating modes. A specific combination of operating conditions and operating modes for a given chip design is called a scenario.
验证芯片设计需要多次PrimeTime运行,以检查在不同工作条件和不同工作模式下的正确操作。给定芯片设计的工作条件和工作模式的特定组合称为scenario.
The number of scenarios for a design is:
Scenarios = [Sets of operating conditions] X [Modes]
The PrimeTime tool can analyze several scenarios in parallel with distributed multi-scenario analysis (DMSA). Instead of analyzing each scenario in sequence, DMSA uses a master PrimeTime process that sets up, executes, and controls multiple worker processes—one for each scenario. You can distribute the processing of scenarios onto different hosts running in parallel, reducing the overall turnaround time. Total runtime is reduced when you share common data between different scenarios.
PrimeTime工具可以使用分布式多scenario分析(DMSA)并行分析多个scenario。DMSA不是按顺序分析每个scenario,而是使用一个主PrimeTime进程来设置、执行和控制多个工作进程,每个进程对应一个scenario。您可以将scenario的处理分发到并行运行的不同主机上,从而缩短总体周转时间。在不同scenario之间共享通用数据时,总运行时间会减少。
A single script can control many scenarios, making it easier to set up and manage different sets of analyses. This capability does not use multithreading (breaking a single process into pieces that can be executed in parallel). Instead, it provides an efficient way to specify the analysis of different operating conditions and modes for a given design and to distribute those analysis tasks onto different hosts.
单个脚本可以控制多个scenario,从而可以更轻松地设置和管理不同的分析集。此功能不使用多线程(将单个进程分解为可以并行执行的片段)。相反,它提供了一种有效的方法,可以为给定设计指定不同操作条件和模式的分析,并将这些分析任务分配到不同的主机上。
To learn how to use DMSA, see
• Definition of Terms
• Overview of the DMSA Flow
• Distributed Processing Setup
• DMSA Batch Mode Script Example
• Baseline Image Generation and Storage
• Host Resource Affinity
• Scenario Variables and Attributes
• Merged Reporting
• Loading Scenario Data Into the Master
• Saving and Restoring Your Session
• License Resource Management
• Worker Process Fault Handling
• Messages and Log Files
• DMSA Variables and Commands
• Limitations of DMSA
Definition of Terms
The following terms describe aspects of distributed processing:
baseline image
Image that is produced by combining the netlist image and the common data files for a scenario.
通过组合网表映像和scenario的公共数据文件生成的映像。
command focus
Current set of scenarios to which analysis commands are applied. The command focus can consist of all scenarios in the session or just a subset of those scenarios.
应用分析命令的当前 scenario 集合。command focus 可以包含会话中的所有 scenario,也可以仅包含其中的一个子集。
current image
Image that is automatically saved to disk when there are more scenarios than hosts, and the worker process must switch to work on another scenario.
当 scenario 数多于主机数、worker 进程必须切换去处理另一个 scenario 时,自动保存到磁盘的映像。
Master
Process that manages the distribution of scenario analysis processes.
管理 scenario 分析进程分配的进程。
scenario
Specific combination of operating conditions and operating modes.
操作条件和操作模式的具体组合。
session
Current set of scenarios selected for analysis.
当前选择用于分析的scenario集
task
Self-contained piece of work defined by the master for a worker to execute.
由主节点定义、供 worker 执行的独立工作单元。
worker
Process started and controlled by the master to perform timing analysis for one scenario; also called a slave process.
由主节点启动和控制、为一个 scenario 执行时序分析的进程;也称为从进程。
A scenario describes the specific combination of operating conditions and operating modes to use when analyzing the design specified by the configuration. There is no limit to the number of scenarios that you can create (subject to memory limitations of the master). To create a scenario, use the create_scenario command, which specifies the scenario name and the names of the scripts that apply the analysis conditions and mode settings for the scenario.
scenario描述了在分析配置指定的设计时要使用的操作条件和操作模式的特定组合。您可以创建的scenario数量没有限制(受主节点的内存限制)。要创建scenario,请使用 create_scenario 命令,该命令指定scenario名称以及应用scenario分析条件和模式设置的脚本的名称。
The scripts are divided into two groups: common-data scripts and specific-data scripts. The common-data scripts are shared between two or more scenarios, whereas the specific-data scripts are specific to the particular scenario and are not shared. This grouping helps the master process manage tasks and to share information between different scenarios, minimizing the amount of duplicated effort for different scenarios.
脚本分为两组:公共数据脚本和特定数据脚本。公共数据脚本在两个或多个scenario之间共享,而特定数据脚本特定于特定scenario并且不共享。这种分组有助于主进程管理任务并在不同scenario之间共享信息,从而最大限度地减少不同scenarios的重复工作量。
A session describes the set of scenarios you want to analyze in parallel. You can select the scenarios for the current session using the current_session command. The current session can consist of all defined scenarios or just a subset.
session描述要并行分析的一组scenario。您可以使用 current_session 命令选择当前session的scenario。当前session可以包含所有定义的scenario,也可以仅包含一个子集。
The command focus is the set of scenarios affected by PrimeTime analysis commands entered at the PrimeTime prompt in the master process. The command focus can consist of all scenarios in the current session or just a subset specified by the current_scenario command. By default, all scenarios of the current session are in the command focus.
command focus 是受主进程中 PrimeTime 提示符下输入的分析命令影响的一组 scenario。command focus 可以包含当前 session 中的所有 scenario,也可以仅包含 current_scenario 命令指定的子集。默认情况下,当前 session 的所有 scenario 都位于 command focus 中。
Overview of the DMSA Flow
To learn about the basics of the multi-scenario flow, see
• Preparing to Run DMSA
• DMSA Usage Flow
Preparing to Run DMSA
Before you start your multi-scenario analysis, you must set the search path and create a .synopsys_pt.setup file:
• Setting the Search Path
• .synopsys_pt.setup File
Setting the Search Path
In multi-scenario analysis, you can set the search_path variable only at the master. When reading in the search path, the master resolves all relative paths in the context of the master. The master process then automatically sets the fully resolved search path at the worker process. For example, you might launch the master in the /remote1/test/ms directory, and set the search_path variable with the following command:
在 multi-scenario 分析中,只能在主节点设置 search_path 变量。读取搜索路径时,主节点会在自身上下文中解析所有相对路径,然后自动将完全解析后的搜索路径设置到 worker 进程。例如,您可以在 /remote1/test/ms 目录中启动主节点,并使用以下命令设置 search_path 变量:
set search_path ". .. ../scripts"
The master automatically sets the search path of the worker to the following:
/remote1/test/ms /remote1/test /remote1/test/scripts
The recommended flow in multi-scenario analysis is to set the search path to specify the location of
multi-scenario 分析中推荐的流程是:通过设置搜索路径来指定以下内容的位置:
• All files for your scenarios and configurations
• 您的 scenario 和配置所需的所有文件
• All Tcl scripts and netlist, SDF, library, and parasitic files to be read in a worker context
• 所有要在 worker 上下文中读取的 Tcl 脚本以及网表、SDF、库和寄生参数文件
For best results, avoid using relative paths in worker context scripts.
为获得最佳结果,请避免在 worker 上下文脚本中使用相对路径。
.synopsys_pt.setup File
The master and workers source the same set of .synopsys_pt.setup files in the following order:
1. PrimeTime install setup file at
install_dir/admin/setup/.synopsys_pt.setup
2. Setup file in your home directory at
~/.synopsys_pt.setup
3. Setup file in the master launch directory at
$sh_launch_dir/.synopsys_pt.setup
To control whether commands are executed in the current pt_shell mode, test the pt_shell_mode variable, which is set to primetime, primetime_master, or primetime_slave. For example:
若要控制命令是否在当前 pt_shell 模式下执行,可检测 pt_shell_mode 变量,其取值为 primetime、primetime_master 或 primetime_slave。例如:
if { $pt_shell_mode == "primetime_master" } {
    set multi_scenario_working_directory "./work"
}
DMSA Usage Flow
You can use the DMSA capability for timing analysis in PrimeTime and PrimeTime SI as well as for power analysis in PrimePower. This dramatically reduces the turnaround time. For more information about using DMSA in PrimePower, see the “Distributed Peak Power Analysis” section in the PrimePower User Guide.
您可以使用DMSA功能在PrimeTime和PrimeTime SI中进行时序分析,并在PrimePower中进行功耗分析。这大大缩短了周转时间。有关在PrimePower中使用DMSA的更多信息,请参阅PrimePower用户指南中的“分布式峰值功率分析”部分。
From the pt_shell prompt of the master PrimeTime process, you can initiate and control up to 256 worker processes (see Limitations of DMSA).
在主PrimeTime进程的pt_shell提示下,您可以启动和控制多达256个工作进程(请参阅DMSA的限制)
A typical multi-scenario analysis has the following steps:
1. Start PrimeTime in the multi-scenario analysis mode by running the pt_shell command with the -multi_scenario option. Alternatively, from a normal PrimeTime session, set the multi_scenario_enable_analysis variable to true.
1.通过运行带有-multi_scenario选项的pt_shell命令,在multi-scenario分析模式下启动PrimeTime。或者,在正常的PrimeTime session中,将multi_scenario_enable_analysis变量设置为true。
2. Create the scenarios with the create_scenario command. Each create_scenario command specifies a scenario name and the PrimeTime script files that apply the conditions for that scenario.
2. 使用 create_scenario 命令创建scenario。每个create_scenario命令都指定一个scenario名称和为该scenario应用条件的PrimeTime脚本文件。
3. Configure the compute resources that you want to use for the timing update and reporting by running the set_host_options command. This command does not start the host, but it sets up the host options for that host.
3. 通过运行 set_host_options 命令,配置要用于计时更新和报告的计算资源。此命令不会启动主机,但它会为该主机设置主机选项
In the following example, the host options are named my_opts, rsh is used to connect to the host ptopt030, and a maximum of three cores per process is specified for the compute resources:
在以下示例中,主机选项命名为 my_opts,使用 rsh 连接到主机 ptopt030,并为计算资源指定每个进程最多三个内核:
pt_shell> set_host_options -name my_opts -max_cores 3 \
              -num_processes 4 -submit_command "/usr/bin/rsh -n" ptopt030
Specify the maximum number of cores by specifying the remote core count with the set_host_options command in the master script, as shown in the previous example.
如上例所示,在主脚本中通过 set_host_options 命令指定远程内核数,即可设定最大内核数。
4. Verify the host options by running the report_host_usage command. This command also reports peak memory and cpu usage for the local process and all distributed processes that are already online. The report displays the host options specified, status of the distributed processes, number of CPU cores each process uses, and licenses used by the distributed hosts.
4. 通过运行 report_host_usage 命令验证主机选项。此命令还报告本地进程和所有已联机的分布式进程的峰值内存和 CPU 使用率。该报告显示指定的主机选项、分布式进程的状态、每个进程使用的 CPU 内核数以及分布式主机使用的许可证。
5. Request compute resources and bring the hosts online by running the start_hosts command.
Note: If you set the multi_scenario_working_directory and multi_scenario_merged_error_log variables, do so before you start the compute resources.
5. 通过运行 start_hosts 命令请求计算资源并使主机联机。
注: 如果设置了 multi_scenario_working_directory 和 multi_scenario_merged_error_log 变量,请在启动计算资源之前执行此操作。
To provide more information about the hosts such as the status and process information, use the report_host_usage command after starting the distributed processes. For example:
要提供有关主机的详细信息(如状态和进程信息),请在启动分布式进程后使用 report_host_usage 命令。例如:
(report_host_usage 输出示例,原图缺失)
Note: You must complete steps 1 through 5 before you can run the current_session command in the next step.
注意:您必须先完成步骤 1 到 5,然后才能在下一步中运行 current_session 命令。
6. Select the scenarios for the session using the current_session command. The command specifies a list of scenarios previously created with the create_scenario command.
6. 使用 current_session 命令选择session的scenario。该命令指定先前使用 create_scenario 命令创建的scenario列表。
7. (Optional) Change the scenarios in the current session that are in command focus, using the current_scenario command. The command specifies a list of scenarios previously selected with the current_session command. By default, all scenarios in the current session are in command focus.
7. (可选)使用 current_scenario 命令更改当前 session 中处于 command focus 的 scenario。该命令指定先前使用 current_session 命令选择的 scenario 列表。默认情况下,当前 session 中的所有 scenario 都处于 command focus 中。
8. View the analysis report and fix validation issues
8. 查看分析报告并修复验证问题
a. Start processing the scenarios by executing the remote_execute command or performing a merged report command at the master. For more information about merged reports, see Merged Reporting
a.通过在主服务器执行 remote_execute 命令或执行合并报告命令来开始处理scenarios。有关合并报表的详细信息,请参阅合并报表
b. When the processing of all scenarios by the worker processes is complete, you can view the analysis reports. Locate the reports generated by the remote_execute command under the directory you specified with the multi_scenario_working_directory variable, as described in Distributed Processing Setup. Alternatively, if you issue the remote_execute command with the -verbose option, all information is displayed directly to the console at the master. The output of all merged reporting commands is displayed directly in the console at the master.
当工作进程完成所有scenario的处理后,您可以查看分析报告。在使用 multi_scenario_working_directory 变量指定的目录下找到 remote_execute 命令生成的报告,如分布式处理设置中所述。或者,如果发出带有 -verbose 选项的 remote_execute 命令,则所有信息都将直接显示在主服务器的控制台上。所有合并的报告命令的输出都直接显示在主站的控制台中。
c. Use ECO commands to fix timing and design rule violations. For more information about ECO commands, see ECO Flow.
c. 使用 ECO 命令修复时序和设计规则违例。有关 ECO 命令的更多信息,请参阅 ECO Flow。
Distributed Processing Setup
For distributed processing of multiple scenarios, you must first invoke the PrimeTime tool in DMSA mode. You then manage your compute resource, define the working directory, create the scenarios, and specify the current session and command focus. For details, see
对于多个 scenario 的分布式处理,您必须先在 DMSA 模式下调用 PrimeTime 工具。然后,管理计算资源、定义工作目录、创建 scenario,并指定当前 session 和 command focus。有关详细信息,请参阅:
• Starting the Distributed Multi-Scenario Analysis Mode
• 启动分布式multi-scenario分析模式
• Managing Compute Resources
• 管理计算资源
• Creating Scenarios
• Specifying the Current Session and Command Focus
• 指定当前会话和命令焦点
• Executing Commands Remotely
• 远程执行命令
• DMSA Batch Mode Script Example
• DMSA 批处理模式脚本示例
Starting the Distributed Multi-Scenario Analysis Mode
To start the PrimeTime tool in DMSA mode, use the -multi_scenario option when you start the PrimeTime shell:
% pt_shell -multi_scenario
PrimeTime starts up and displays the pt_shell prompt just as in a single scenario session. A distributed multi-scenario analysis is carried out by one master PrimeTime process and multiple PrimeTime worker processes. The master and worker processes interact with each other using full-duplex network communications.
PrimeTime启动并显示pt_shell提示符,就像在单个scenario session中一样。分布式multi-scenario分析由一个主PrimeTime进程和多个PrimeTime工作进程执行。主进程和工作进程使用全双工网络通信相互交互。
The master process generates analysis tasks and manages the distribution of those tasks to the worker processes. It does not perform timing analysis and does not hold any data other than the netlist and the multi-scenario data entered. The command set of the master is therefore restricted to commands that perform master functions.
主进程生成分析任务,并管理这些任务向 worker 进程的分发。它不执行时序分析,除网表和输入的 multi-scenario 数据外不保存任何数据。因此,主节点的命令集仅限于执行主节点功能的命令。
To specify the directory in which to store working files for distributed processing, use the multi_scenario_working_directory variable.
若要指定存储分布式处理工作文件的目录,请使用 multi_scenario_working_directory 变量。
To specify the file in which to write all error, warning, and information messages issued by the workers, set the multi_scenario_merged_error_log variable. Any message issued by more than one worker is merged into a single entry in the merged error log. For details, see Merged Error Log.
若要指定写入 worker 发出的所有错误、警告和信息消息的文件,请设置 multi_scenario_merged_error_log 变量。由多个 worker 发出的同一消息会在合并错误日志中合并为单个条目。有关详细信息,请参阅 Merged Error Log。
The multi_scenario_message_limit variable limits the number of messages of a particular type written to the merged error log on a per-task basis. The default is 100.
multi_scenario_message_limit 变量限制每个任务写入合并错误日志的特定类型消息的数量。默认值为 100。
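结合上述两个变量,主节点脚本中的一个最小设置示意如下(目录和文件名均为假设,需在启动计算资源之前设置):

```tcl
# 在主节点(pt_shell -multi_scenario)中设置
set multi_scenario_working_directory "./ms_work"                     ;# 分布式处理工作文件目录(名称为假设)
set multi_scenario_merged_error_log  "./ms_work/merged_errors.log"   ;# worker 消息的合并错误日志(名称为假设)
```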
Upon startup, every PrimeTime process, both master and worker, looks for and sources the .synopsys_pt.setup setup file just as in normal mode. Each process also writes its own separate log file. For more information, see Log Files.
启动时,每个 PrimeTime 进程(主进程和 worker 进程)都会像正常模式下一样查找并 source .synopsys_pt.setup 设置文件。每个进程还会写入自己单独的日志文件。有关更多信息,请参阅 Log Files。
Managing Compute Resources
Managing compute resources enables the tool to allocate pooled resources as they become available. If a computing management capability is available for load balancing, you can allocate hosts from these systems to the compute resources used for processing scenarios.
通过管理计算资源,该工具可以在共用资源可用时对其进行分配。如果计算管理功能可用于负载平衡,则可以将这些系统中的主机分配给用于处理scenario的计算资源。
The tool supports the following hardware management architectures:
该工具支持以下硬件管理体系结构:
• LSF (Load Sharing Facility from Platform Computing, Inc.)
• Grid computing (Global Resource Director from Gridware, Inc.)
• Generic computing farm
Unmanaged computing resources are compute servers/workstations that are not load balanced. They can be added to the compute resources used for processing scenarios and are internally load balanced during DMSA.
非托管计算资源是未做负载均衡的计算服务器/工作站。它们可以添加到用于处理 scenario 的计算资源中,并在 DMSA 期间进行内部负载均衡。
Setting Up Distributed Host Options
You can allocate worker hosts as compute resources with the set_host_options command. Each worker process invoked by the master is a single PrimeTime session that is accessible only by the master process. It is not possible to manually launch a PrimeTime process in worker mode.
您可以使用 set_host_options 命令将工作主机分配为计算资源。主进程调用的每个工作进程都是一个PrimeTime session,只能由主进程访问。无法在工作模式下手动启动PrimeTime进程。
The master generates tasks for the worker processes to execute. Any one task can apply to only one scenario and any one worker process can be configured for only one scenario at a time. If there are more scenarios than worker processes to run them, some workers need to swap out one scenario for another so that all are executed.
主节点生成任务供 worker 进程执行。任何一个任务只能应用于一个 scenario,任何一个 worker 进程一次也只能为一个 scenario 配置。如果 scenario 数多于 worker 进程数,则某些 worker 需要将一个 scenario 换成另一个,以便执行所有 scenario。
It is important to avoid specifying more processes on a host than the number of available CPUs. Otherwise, much time is spent swapping worker processes in and out. For optimal system performance of DMSA, match the number of worker processes, scenarios in command focus, and available CPUs so that every scenario is executed in its own worker process on its own CPU.
请务必避免在主机上指定的进程数超过可用 CPU 数。否则,将花费大量时间换入换出 worker 进程。为了获得 DMSA 的最佳系统性能,请使 worker 进程数、command focus 中的 scenario 数和可用 CPU 数相匹配,让每个 scenario 都在自己的 CPU 上、在自己的 worker 进程中执行。
Specify the maximum number of cores each worker process can use by running the set_host_options command at the master. You can also specify this command in the .synopsys_pt.setup file that the remote processes execute; however, it is better to use the command at the master. The maximum cores specification in the .synopsys_pt.setup file has higher priority.
通过在主节点上运行 set_host_options 命令,指定每个 worker 进程可以使用的最大内核数。您也可以在远程进程执行的 .synopsys_pt.setup 文件中指定此命令;但最好在主节点上使用该命令。.synopsys_pt.setup 文件中的最大内核数设置具有更高的优先级。
You can additionally configure the master and worker processes by setting the following variables.
(表:用于配置主进程和 worker 进程的变量,原表缺失)
To remove host options and terminate their associated processes, use the remove_host_options command. For example, to remove and stop host1 and host3:
要删除主机选项并终止其关联的进程,请使用 remove_host_options 命令。例如,要删除并停止 host1 和 host3:
pt_shell> remove_host_options {host1 host3}
Configure the Distributed Environment
To configure the distributed environment (remote launch script, worker startup script, and maximum levels of collection attributes), use the set_distributed_parameters command. Set the configuration before you run the start_hosts command.
要配置分布式环境(远程启动脚本、worker 启动脚本和 collection 属性的最大层级),请使用 set_distributed_parameters 命令。请在运行 start_hosts 命令之前完成配置。
Starting the Compute Resources
To start the hosts specified by the set_host_options command, use the start_hosts command, which requests compute resources and brings the hosts online. The start_hosts command continues to run until one of the following events occurs:
要启动 set_host_options 命令指定的主机,请使用 start_hosts 命令,该命令将请求计算资源并使主机联机。start_hosts 命令将继续运行,直到发生以下事件之一:
• All requested remote host processes become available
• 所有请求的远程主机进程都可用
• The number of remote host processes that become available reaches the -min_hosts value specified in the start_hosts command (if specified)
• 可用的远程主机进程数达到 start_hosts 命令中指定的 -min_hosts 值(如果指定过)
• The number of seconds reaches the -timeout value specified in the start_hosts command (default 21600)
• 秒数达到 start_hosts 命令中指定的超时值(默认为 21600s)
The number of available hosts directly affects the ability of the PrimeTime tool to simultaneously analyze scenarios. When there are too few hosts available, expensive image swapping occurs until enough hosts become available.
可用主机的数量直接影响PrimeTime工具同时分析场景的能力。当可用的主机太少时,会发生成本高昂的映像交换,直到有足够的主机可用。
When you run the start_hosts command, any previously added hosts begin to transition through the states of their life cycle, as shown in the following diagram.
运行 start_hosts 命令时,任何以前添加的主机都将开始转换其生命周期的状态,如下图所示。
(图:主机生命周期状态转换,原图缺失)
DMSA Virtual Workers
When the number of scenarios exceeds the number of available hosts, at least two scenarios must be assigned to run on a host. If multiple commands are executed in those scenarios, the tool must perform save and restore operations to swap designs in and out of memory for executing each command, which can consume significant runtime and network resources.
当 scenario 数超过可用主机数时,至少会有两个 scenario 被分配到同一台主机上运行。如果要在这些 scenario 中执行多个命令,工具必须执行保存和恢复操作,把设计换入换出内存来执行每个命令,这可能消耗大量运行时间和网络资源。
In this situation, you can avoid the additional runtime and delay by setting a “load factor” in the set_host_options command:
在这种情况下,您可以通过在 set_host_options 命令中设置“负载因子”来避免额外的运行时长和延迟:
pt_shell> set_host_options -load_factor 2 …
The default load factor is 1, which disables the feature. Setting a value of 2 reduces save and restore operations at the cost of more memory.
默认负载因子为 1,即禁用此功能。设置为 2 可减少保存和恢复操作,但会占用更多内存。
The feature works by creating virtual workers in memory that can each handle one scenario. A setting of 2 doubles the number of workers by creating one virtual worker in memory for each real worker. If the real and virtual workers can accept all the scenarios at the same time, there is no need for save and restore operations.
该功能的工作原理是在内存中创建虚拟工作线程,每个工作线程都可以处理一个scenario。设置为 2 时,通过在内存中为每个实际工作线程创建一个虚拟工作线程,使工作线程数量增加一倍。如果真实和虚拟 worker 可以同时接受所有scenario,则无需进行保存和恢复操作。
The following set_host_options command examples demonstrate this feature. The -submit_command option shows the syntax for submitting jobs to an LSF farm with a specific memory allocation; use the appropriate syntax for your installation.
以下set_host_options命令示例演示了此功能。-submit_command 选项显示将作业提交到具有特定内存分配的 LSF farm的语法;使用适合您安装的语法。
• 2 scenarios, multiple commands per scenario:
set_host_options -num_processes 2 -submit_command {bsub -n 16 -R "rusage[mem=16384]"}
The number of processes equals the number of scenarios, so there is no benefit from increasing the load factor.
进程数等于 scenario 数,因此增加负载因子没有任何好处。
• 4 scenarios, multiple commands per scenario:
set_host_options -num_processes 2 -load_factor 2 -submit_command {bsub -n 16 -R "rusage[mem=32768]"}
This command doubles the number of workers from 2 to 4 by creating a virtual worker for each real worker. It also doubles the memory allocation to accommodate the virtual workers. No save and restore operations are needed because the 4 workers can accept the 4 scenarios at the same time.
此命令通过为每个真实 worker 创建一个虚拟 worker,将 worker 数从 2 个增加到 4 个。它还将内存分配加倍以容纳虚拟 worker。由于 4 个 worker 可以同时接受 4 个 scenario,因此不需要保存和恢复操作。
• 6 scenarios, multiple commands per scenario:
set_host_options -num_processes 2 -load_factor 2 -submit_command {bsub -n 16 -R "rusage[mem=32768]"}
This command doubles the number of workers from 2 to 4, which reduces the need for save and restore operations. However, it does not eliminate them completely because the 4 workers cannot accept all 6 scenarios at the same time.
此命令将 worker 数从 2 个增加到 4 个,从而减少了保存和恢复操作。但由于 4 个 worker 无法同时接受全部 6 个 scenario,这些操作并未被完全消除。
To optimize the time benefit, make the total number of workers equal to the number of scenarios (possibly using multiple set_host_options commands) and allocate enough memory on the hosts for the virtual workers.
若要最大化时间收益,请使 worker 总数等于 scenario 数(可能需要使用多个 set_host_options 命令),并在主机上为虚拟 worker 分配足够的内存。
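按照这一原则,对于上面 6 个 scenario 的例子,可以直接把 worker 总数配成 6(示意;提交命令与内存参数沿用前文假设的 LSF 语法):

```tcl
# 3 个真实 worker × 负载因子 2 = 6 个 worker,恰好等于 scenario 数,
# 真实和虚拟 worker 可同时接受全部 scenario,无需保存/恢复操作
set_host_options -num_processes 3 -load_factor 2 \
    -submit_command {bsub -n 16 -R "rusage[mem=32768]"}
```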
Creating Scenarios
A scenario is a specific combination of operating conditions and operating modes for a given configuration. Create a scenario with the create_scenario command. For example:
scenario是给定配置的操作条件和操作模式的特定组合。使用 create_scenario 命令创建scenario。例如:
pt_shell> create_scenario -name scen1 -common_data {common_s.pt} -specific_data {specific1.pt}
There are two types of scripts associated with the scenario:
• Scripts that handle common data shared by multiple scenarios and included in the baseline image generation process (see Baseline Image Generation and Storage)
• 用于处理多个scenario共享并包含在baseline image generation process中的公共数据的脚本(请参阅 Baseline Image 生成和存储)
• Scripts that handle specific data applied only to the specified scenario
• 处理仅适用于指定scenario的特定数据的脚本
The master shares common data between groups of scenarios where appropriate, thereby reducing the total runtime for all scenarios. For example, consider the following figure.
在适当的情况下,主节点在scenario组之间共享通用数据,从而减少所有scenario的总运行时间。例如,请考虑下图
(图:common_all.pt、common_s.pt 与各 specific 脚本在四个 scenario 间的共享关系,原图缺失)
This design is to be analyzed at two process variation extremes, called slow and fast, and two operating modes, called read and write. Thus, the four scenarios to be analyzed are slow-read, slow-write, fast-read, and fast-write.
该设计要在两个工艺偏差极端(称为 slow 和 fast)与两种工作模式(称为 read 和 write)下进行分析。因此,要分析的四个 scenario 是 slow-read、slow-write、fast-read 和 fast-write。
The common_all.pt script reads in the design and applies constraints that are common to all scenarios. The common_s.pt script is shared between some, but not all, scenarios. The specific1.pt script directly contributes to specifying conditions of an individual scenario.
common_all.pt 脚本读入设计并应用所有 scenario 共有的约束。common_s.pt 脚本在部分(而非全部)scenario 之间共享。specific1.pt 脚本直接用于指定单个 scenario 的条件。
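结合上述脚本分组,四个 scenario 的创建命令可以大致写成如下形式(示意;common_slow.pt、specific_sr.pt 等文件名均为假设):

```tcl
# 每个 scenario:共享的公共数据脚本 + 自己的特定数据脚本
create_scenario -name slow_read  -common_data {common_all.pt common_slow.pt} -specific_data {specific_sr.pt}
create_scenario -name slow_write -common_data {common_all.pt common_slow.pt} -specific_data {specific_sw.pt}
create_scenario -name fast_read  -common_data {common_all.pt common_fast.pt} -specific_data {specific_fr.pt}
create_scenario -name fast_write -common_data {common_all.pt common_fast.pt} -specific_data {specific_fw.pt}
```

这样,主节点可以在共享 common_all.pt(以及各自的 corner 脚本)的 scenario 之间复用公共数据,减少重复工作。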
Use the -image option of the create_scenario command to create scenarios from a previously saved scenario image. This image can be generated from a standard PrimeTime session, a session saved from the DMSA master, or a session saved from the current image generated during a DMSA session.
使用 create_scenario 命令的 -image 选项,可以从以前保存的 scenario 映像创建 scenario。此映像可以来自标准 PrimeTime session、从 DMSA 主节点保存的 session,或 DMSA session 期间生成的 current image。
If you need to create voltage scaling groups, do so before you use the link_design command in one of the scripts used to set up a scenario.
如果需要创建电压缩放组,请在用于设置scenario的脚本之一中使用 link_design 命令之前执行此操作。
Removing Scenarios
To remove a scenario previously created with the create_scenario command, use the remove_scenario command:
pt_shell> remove_scenario s1
Specifying the Current Session and Command Focus
To specify the set of scenarios to analyze, use the current_session command:
pt_shell> current_session {s2 s3 s4}
The command lists the scenarios to be part of the current session. By default, all the listed scenarios are in command focus, the scenarios currently affected by analysis commands in the master. To further narrow the command focus, use the current_scenario command:
该命令列出了要作为当前session一部分的scenario。默认情况下,所有列出的scenario都处于command focus中,即当前受主服务器中分析命令影响的scenario。若要进一步缩小command focus,请使用 current_scenario 命令:
pt_shell> current_scenario {s3}
This command is useful during interactive analysis. For example, you might want to modify the load on a particular net and then recheck the path timing in just that scenario or a subset of all scenarios in the current session.
此命令在交互式分析期间很有用。例如,您可能希望修改某条线网上的负载,然后仅在该 scenario 或当前 session 中的部分 scenario 内重新检查路径时序。
You can restore the command focus to all scenarios in the session as follows:
pt_shell> current_scenario -all
To check the distributed processing setup before you begin DMSA, use the report_multi_scenario_design command, which creates a detailed report about user-defined multi-scenario objects and attributes.
若要在开始 DMSA 之前检查分布式处理设置,请使用 report_multi_scenario_design 命令,该命令将创建有关用户自定义的multi-scenario的object和attributes的详细报告。
Executing Commands Remotely
To explicitly run commands in the worker context, use the remote_execute command. Enclose the list of commands as a Tcl string. Separate the commands with semicolons so that they execute one at a time.
若要在 worker 上下文中显式运行命令,请使用 remote_execute 命令。将命令列表括成一个 Tcl 字符串,并用分号分隔各命令,使其逐条执行。
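按照这种分号分隔的写法,一个简单示意如下(命令组合仅为示例):

```tcl
# 两条命令作为一个 Tcl 字符串,用分号分隔,在 worker 上依次执行
remote_execute {update_timing; report_timing -max_paths 10}
```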
To evaluate subexpressions and variables remotely, generate the command string using curly braces. For example:
若要远程计算子表达式和变量,请使用大括号生成命令字符串。例如:
remote_execute { # variable all_in evaluated at worker
report_timing -from $all_in -to [all_outputs]
}
In this example, the report_timing command is evaluated at the worker, and the all_in variable and the all_outputs expression are evaluated in a worker context.
在此示例中,report_timing 命令在 worker 处求值,all_in 变量和 all_outputs 表达式也在 worker 上下文中求值。
To evaluate expressions and variables locally at the master, enclose the command string in quotation marks. All master evaluations must return a string. For example:
若要在主服务器本地计算表达式和变量,请将命令字符串括在引号中。所有主评估都必须返回一个字符串。例如:
remote_execute "
# variable all_out evaluated at master
report_timing -to $all_out
"
In this example, the report_timing command is evaluated at the worker, and the all_out variable is evaluated at the master. Upon issuing the remote_execute command, the master generates tasks for execution in all scenarios in command focus.
在此示例中,report_timing 命令在 worker 处执行,而 all_out 变量在主节点处求值。发出 remote_execute 命令后,主节点会生成任务,在 command focus 中的所有 scenario 内执行。
The following example shows how to use the remote_execute command.
remote_execute {set x 10}
# Send tasks for execution in all scenarios in command focus
set x 20
remote_execute {report_timing -nworst $x}
# Leads to report_timing -nworst 10 getting executed at the workers
remote_execute "report_timing -nworst $x"
# Leads to report_timing -nworst 20 getting executed at the workers
If you use the -pre_commands option, the remote_execute command executes the specified commands before the remote execution command.
如果使用 -pre_commands 选项,那么 remote_execute 命令将在远程执行命令之前执行指定的命令。
remote_execute -pre_commands {cmd1; cmd2; cmd3} "report_timing"
# On the worker host, execute cmd1, cmd2, and cmd3 before
# executing report_timing
If you use the -post_commands option, the listed commands are executed after the commands specified for remote execution.
如果使用 -post_commands 选项,则列出的命令将在为远程执行指定的命令之后执行。
remote_execute -post_commands {cmd1; cmd2; cmd3} "report_timing"
# On the worker host, execute cmd1, cmd2, and cmd3 after
# executing report_timing
You can use the remote_execute command with the -verbose option to return all worker data to the master terminal screen, instead of piping it to the out.log file in the working directory of the DMSA file system hierarchy.
您可以使用带有 -verbose 选项的 remote_execute 命令将所有工作器数据返回到主终端屏幕,而不是将其通过管道传输到 DMSA 文件系统层次结构的工作目录中的 out.log 文件。
The ability to execute netlist editing commands remotely extends to all PrimeTime commands, except when using the following commands:
远程执行网表编辑命令的能力扩展到所有PrimeTime命令,但使用以下命令时除外:
• Explicit save_session (worker only)
• Explicit restore_session
• remove_design
DMSA Batch Mode Script Example
In batch mode, PrimeTime commands are contained in a script invoked when the tool is started. For example,
% pt_shell -multi_scenario -file script.tcl
The following script is a typical multi-scenario master script.
(多 scenario 主脚本示例,原图缺失)
A task is a self-contained block of work that a worker process executes to perform some function. A worker process starts execution of a task for the specified scenario as soon as it gets the licenses appropriate to that task.
task 是 worker 进程为实现某项功能而执行的独立工作块。worker 进程一旦获得与任务相应的许可证,就会开始执行指定 scenario 的任务。
This script performs the following functions:
此脚本执行以下功能:
1. The initial set command sets the search path at the master (used for finding scripts).
1. 开头的 set 命令在主节点设置搜索路径(用于查找脚本)。
2. The set_host_options command specifies the host options for the compute resources.
2. set_host_options 命令为计算资源指定主机选项。
3. The report_host_usage command generates a detailed report that describes all of the host options that have been set.
3. report_host_usage 命令生成描述所有已设置主机选项的详细报告。
4. The start_hosts command starts the host processes specified by the set_host_options command.
4. start_hosts 命令启动 set_host_options 命令指定的主机进程。
5. The create_scenario commands create four scenarios named s1 through s4. Each command specifies the common-data and specific-data scripts for the scenario.
5. create_scenario 命令创建名为 s1 到 s4 的四个 scenario。每条命令为相应 scenario 指定公共数据脚本和特定数据脚本。
6. The current_session command selects s1 and s2 for the current session, which is also the command focus by default. (You could use the current_scenario command to narrow the focus further.)
6. current_session 命令为当前 session 选择 s1 和 s2,默认情况下它们也是 command focus。(可以使用 current_scenario 命令进一步缩小焦点。)
7. The remote_execute -pre_commands command begins the task generation process and execution of those tasks in scenarios s1 and s2. The resulting logs contain all the output from the commands that were executed. Because the -pre_commands option is specified, the two source commands are executed before the report_timing command. The command list can contain any valid PrimeTime commands or sourcing of scripts that contain valid PrimeTime commands, excluding commands that alter the netlist or save or restore the session.
7. remote_execute -pre_commands 命令开始scenario s1 和 s2 中任务的生成与执行。生成的日志包含已执行命令的所有输出。由于指定了 -pre_commands 选项,因此两个 source 命令会在 report_timing 命令之前执行。命令列表可以包含任何有效的PrimeTime命令,或 source 包含有效PrimeTime命令的脚本,但不包括更改网表或保存/恢复session的命令。
8. The quit command terminates the worker processes and master processes in an orderly manner and releases the checked-out PrimeTime licenses.
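按照上面第 1~8 步,主脚本的结构大致如下(仅为示意,并非原文截图内容:主机配置以及 common.tcl、s1.tcl 等脚本文件名均为假设):

```tcl
# 1. 在 master 上设置搜索路径(用于查找脚本)
set search_path ". /mypath/scripts"

# 2~4. 指定计算资源、报告并启动 host 进程
set_host_options -name my_hosts -num_processes 4 \
    -submit_command {bsub -n 4}
report_host_usage
start_hosts

# 5. 创建四个 scenario,各自指定 common-data 和 specific-data 脚本
create_scenario -name s1 -common_data common.tcl -specific_data s1.tcl
create_scenario -name s2 -common_data common.tcl -specific_data s2.tcl
create_scenario -name s3 -common_data common.tcl -specific_data s3.tcl
create_scenario -name s4 -common_data common.tcl -specific_data s4.tcl

# 6. 将当前 session(默认也是 command focus)限定为 s1 和 s2
current_session {s1 s2}

# 7. 在 worker 上先 source 两个脚本,再执行 report_timing
remote_execute -pre_commands {source pre1.tcl; source pre2.tcl} \
    {report_timing}

# 8. 有序结束 worker 与 master 进程并释放 license
quit
```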
The script writes out the report and log files in the following directory paths (assuming the master pt_shell was launched from the directory /mypath).
• Master command log:
/mypath/pt_shell_command.log
• Errors generated by the workers and returned to the master and merged for reporting:
/mypath/ms_session_1/merged_errors.log
• Worker output log for work done for the session:
/mypath/ms_session_1/default_session/out.log
• Worker output logs generated for s1 and s2:
/mypath/ms_session_1/s1/out.log
/mypath/ms_session_1/s2/out.log
• Worker command logs:
/mypath/ms_session_1/pt_shell_command_log/platinum1_pt_shell_command.log
/mypath/ms_session_1/pt_shell_command_log/platinum2_pt_shell_command.log
Baseline Image Generation and Storage
The first task that the master generates and delegates to a worker process is baseline image generation. The baseline image consists of a netlist representation and the common data files for that scenario. In many cases, the same image can be shared by multiple scenarios.
主节点生成并委托给工作进程的第一个任务是baseline image生成。Baseline image由网表表示和该scenario的通用数据文件组成。在许多情况下,同一image可以由多个scenario共享。
Before any other types of tasks can proceed, baseline image generation must be completed successfully. Failure of image generation also results in failure of the multi-scenario analysis for those scenarios that depend on the image.
在继续执行任何其他类型的任务之前,必须成功完成baseline image生成。image生成失败还会导致依赖该image的那些scenario的multi-scenario分析失败。
For each scenario in command focus that provides a baseline image, that image is written to the scenario_name/baseline_image directory under the multi-scenario working directory.
对于提供baseline image的command focus中的每个scenario,该image将写入multi-scenario工作目录下的 scenario_name/baseline_image 目录。
After baseline image generation, the master generates and delegates to worker processes the execution of the user-specified commands in the scenarios that are in command focus.
生成baseline image后,主节点生成任务并委托给工作进程,在command focus中的scenario里执行用户指定的命令。
When there are more scenarios than processes available to execute the scenario tasks, the worker process saves a current image for one scenario while it proceeds with task execution for a different scenario. The worker process saves the image in the scenario_name/current_image directory under the multi-scenario working directory.
当scenario多于可用于执行scenario任务的进程时,工作进程会保存一个scenario的当前image,同时继续执行另一个scenario的任务。工作进程将image保存在multi-scenario工作目录下的 scenario_name/current_image 目录中。
Host Resource Affinity
Automatic reconfiguring of worker processes to run different tasks in different scenarios can be an expensive operation. The multi-scenario process tries to minimize the swapping out of scenarios, but is subject to the limitations imposed by the number of scenarios in command focus and the number of worker processes added. A significant imbalance of scenarios to worker processes can degrade system performance.
自动重新配置工作进程以在不同scenario中运行不同的任务可能是一项代价高昂的操作。Multi-scenario进程会尽量减少scenario的换入换出,但会受到command focus中scenario数量和已添加工作进程数量的限制。scenario与工作进程数量的严重失衡可能会降低系统性能。
You can achieve optimal performance from DMSA if there is a worker process for every scenario in command focus, a CPU available to execute each worker process, and a license for each feature needed for every worker process. This results in all worker processes executing concurrently, allowing maximum throughput.
如果command focus中的每个scenario都有一个工作进程,一个可用于执行每个工作进程的 CPU,以及每个工作进程所需的每个功能的许可证,则可以从 DMSA 实现最佳性能。这会导致所有工作进程并发执行,从而实现最大吞吐量。
You can optionally assign scenarios an “affinity” for execution on specified hosts, allowing more efficient allocation of limited computing resources. For example, suppose that you need to run four scenarios, S1 through S4, having different memory and core processing requirements. By default, you need to make four hosts available, each with enough CPU power and memory to accommodate the largest job, as shown in the following figure.
您可以选择为scenario分配在指定主机上执行的“关联性”(affinity),从而更有效地分配有限的计算资源。例如,假设您需要运行四个scenario(S1 到 S4),这些scenario具有不同的内存和核心处理要求。默认情况下,您需要提供四台主机,每台主机都具有足够的 CPU 能力和内存来容纳最大的作业,如下图所示。
[图片:默认情况下需要四台都能容纳最大作业的主机]
You can optionally assign smaller jobs to smaller hosts, and thereby use fewer resources while achieving the same turnaround time, as shown in the following figure.
[图片:通过 affinity 将较小作业分配给较小主机]
To specify the host resource affinity for scenarios, use commands similar to the following:
set_host_options -name SmallHosts -max_cores 8 -num_processes 2 \
-submit_command {bsub -n 8 -R "rusage[mem=4000] span[ptile=1]"}
set_host_options -name BigHosts -max_cores 16 -num_processes 2 \
-submit_command {bsub -n 16 -R "rusage[mem=8000] span[ptile=1]"}
create_scenario -name S1 –affinity SmallHosts …
create_scenario -name S2 –affinity SmallHosts …
create_scenario -name S3 –affinity BigHosts ...
create_scenario -name S4 –affinity BigHosts ...
To report the scenario affinity settings, use the report_multi_scenario_design -scenario command.
Scenario Variables and Attributes
You can set variables in either the master or any of its workers during DMSA, as described in the following sections:
您可以在 DMSA 期间在主服务器或其任何工作进程中设置变量,如以下各节所述
• Master Context Variables
• Worker Context Variables and Expressions
• Setting Distributed Variables
• Getting Distributed Variable Values
• Merging Distributed Variable Values
• Synchronizing Object Attributes Across Scenarios
Master Context Variables
Here is an example of a master context variable.
set x {"ffa/CP ffb/CP ffc/CP"}
report_timing -from $x
Notice how x is forced to be a string.
Worker Context Variables and Expressions
The master has no knowledge of variables residing on the worker, but can pass variable or expression strings to the worker for remote evaluation. To pass a token to the worker for remote evaluation, use curly braces.
主节点不了解工作线程上的变量,但可以将变量或表达式字符串传递给工作线程进行远程评估。若要将token传递给worker进行远程评估,请使用大括号。
Suppose you have the following scenarios:
• Scenario 1: x has value of ff1/CP
• Scenario 2: x has value of ff2/CP
The report_timing -from {$x} command reports the worst paths from all paths starting at ff1/CP in the context of scenario 1, and all paths starting at ff2/CP in the context of scenario 2.
Setting Distributed Variables
To set variables at the worker level from the master, use the set_distributed_variables command. The variable can be a Tcl variable, collection, array, or list.
To set variables in multiple scenarios from the master, first create a Tcl array of the required values for those scenarios, then use the set_distributed_variables command to distribute the settings to the scenarios.
For example, suppose that you want to set variables x and y at the worker level to different values in two different scenarios. Create an array, indexed by scenario name, containing the values to set in the workers. From the master, execute the following commands:
例如,假设您希望在两个不同的scenario中将工作线程级别的变量 x 和 y 设置为不同的值。创建一个按scenario名称编制索引的数组,其中包含要在工作线程中设置的值。在主节点上,执行以下命令:
pt_shell> array set x {s1 10 s2 0}
pt_shell> array set y {s1 0 s2 15}
pt_shell> set_distributed_variables {x y}
This sends the values specified in the arrays x and y from the master to the workers, and creates the variables x and y at the workers if they do not already exist, and sets them to the specified values.
这会将数组 x 和 y 中指定的值从主服务器发送到工作线程,并在工作线程处创建变量 x 和 y(如果它们尚不存在),并将它们设置为指定的值。
To create and set a single variable, there is no need to create an array. For example,
pt_shell> set my_var 2.5
2.5
pt_shell> set_distributed_variables my_var
Getting Distributed Variable Values
To get the values of variables set in scenarios in command focus, use the get_distributed_variables command. The retrieved values are returned to the master process and stored in a Tcl array, indexed by scenario name.
For example, the following session retrieves slack attribute values from two scenarios:
current_session {scen1 scen2}
remote_execute {
set pin_slack1 [get_attribute -class pin U1/Z slack];
set pin_slack2 [get_attribute -class pin U2/Z slack]
}
get_distributed_variables {
pin_slack1 pin_slack2
}
The session returns two arrays, pin_slack1 and pin_slack2. Indexing pin_slack1(scen1) returns the scalar value of slack at U1/Z in context of scenario scen1. Indexing pin_slack1(scen2) returns the scalar value of slack at U1/Z in the context of scenario scen2.
When retrieving a collection, you need to specify which attributes to retrieve from the collection. For example, the following session retrieves timing paths from two scenarios:
current_session {scen1 scen2}
remote_execute {
set mypaths [get_timing_paths -nworst 100]
}
get_distributed_variables mypaths -attributes {slack}
The session returns an array called mypaths. Indexing mypaths(scen1) yields a collection of the 100 worst paths from scenario scen1, ordered by slack. Similarly, indexing mypaths(scen2) yields a collection of the 100 worst paths from scen2.
session返回一个名为 mypaths 的数组。索引 mypaths(scen1) 会得到scenario scen1 中 100 条最差路径的集合,按 slack 排序。同样,索引 mypaths(scen2) 会得到 scen2 中 100 条最差路径的集合。
Merging Distributed Variable Values
By default, the get_distributed_variables command brings back a scenario variable as an array. With the -merge_type option, the values are merged back into a variable instead.
默认情况下,get_distributed_variables 命令将scenario变量作为数组带回。使用 -merge_type 选项时,这些值将合并回变量中。
For example, to keep the minimum (most negative) value of the worst_slack variable from all scenarios:
例如,要保留所有scenario中 worst_slack 变量的最小值(最负值):
pt_shell> get_distributed_variables {worst_slack} -merge_type min
1
pt_shell> echo $worst_slack
-0.7081
You can also merge arrays of one or more dimensions and keep the worst value for each array entry. For example, you have the following array settings in three scenarios:
您还可以合并一个或多个维度的数组,并为每个数组条目保留最差值。例如,在三种方案中具有以下数组设置:
[图片:三个scenario中的数组取值示例]
To report the most negative values, use these commands:
[图片:按条目合并数组并报告最负值的命令示例]
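截图中的数组示例无法直接引用,这里给出一个等价的示意(数组名 wns、索引名和数值均为假设):

```tcl
# 假设三个 scenario 的 worker 上各有一个按时钟名索引的数组 wns:
#   s1: wns(clk1) = -0.10   wns(clk2) =  0.05
#   s2: wns(clk1) = -0.30   wns(clk2) = -0.02
#   s3: wns(clk1) =  0.01   wns(clk2) = -0.15

# 在 master 上按条目取最小(最负)值进行合并:
get_distributed_variables {wns} -merge_type min

# 合并后 wns(clk1) 为 -0.30,wns(clk2) 为 -0.15
echo $wns(clk1)
echo $wns(clk2)
```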
The -merge_type option can also merge together lists of values from the scenarios. For example, to obtain a list of all clock names from every scenario, use these commands:
-merge_type 选项还可以将scenario中的值的列表合并在一起。例如,若要从每个scenario中获取所有时钟名称的列表,请使用以下命令:
[图片:用 -merge_type 合并各scenario时钟名列表的命令示例]
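例如,下面是一个收集各scenario时钟名并合并为去重列表的示意(变量名为假设;get_object_name 用于取对象名):

```tcl
# 在每个 scenario 的 worker 上收集时钟名列表
remote_execute {
    set clk_names [get_object_name [all_clocks]]
}

# 在 master 上把各 scenario 的列表合并成一个去重后的列表
get_distributed_variables {clk_names} -merge_type unique_list
echo $clk_names
```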
The get_distributed_variables command supports the following merge types:
-merge_type min
-merge_type max
-merge_type average
-merge_type sum
-merge_type list
-merge_type unique_list
-merge_type none
You can use the -null_merge_method option to specify what happens when merging null (empty) entries: ignore the null entry, allow the null value to override other entries, or issue an error message. For details, see the man page for the get_distributed_variables command.
可以使用 -null_merge_method 选项来指定合并 null(空)条目时的行为:忽略 null 条目、允许 null 值覆盖其他条目,或发出错误消息。有关详细信息,请参见 get_distributed_variables 命令的手册页。
Note that a null data value is different from an undefined value. If array entries are undefined, the -null_merge_method option does not apply; the merging process operates normally on the remaining data values.
请注意,null 数据值不同于未定义的值。如果数组条目未定义,则 -null_merge_method 选项不适用;合并过程对剩余数据值正常运行。
Synchronizing Object Attributes Across Scenarios
You can easily set an object attribute to the same value across all scenarios for a specified object, or for each member of a collection of objects.
您可以轻松地为指定对象或对象集合中的每个成员,在所有scenario中将某个对象属性设置为相同的值。
For example, to synchronize the dont_touch attribute of cell U72 to true across all scenarios in the current command focus, use the following command:
例如,若要在当前command focus中的所有scenario中将cell U72 的 dont_touch 属性同步为 true,请使用以下命令:
pt_shell> synchronize_attribute -class cell {U72} dont_touch
This command works in the DMSA master and can be used to synchronize the settings for a user-defined attribute or the dont_touch application attribute across all scenarios in the current command focus. It cannot be used on application attributes other than dont_touch.
此命令在 DMSA 主服务器中工作,可用于在当前command focus中的所有scenario中同步用户定义属性或dont_touch应用属性的设置。它不能用于除 dont_touch 以外的应用属性。
The -merge_type option can be set to min or max to specify the synchronizing action. The default is max, which synchronizes the attributes to the maximum value found for the attribute across the scenarios, or the last string in alphanumeric order for a string attribute, or true for a Boolean attribute.
可以将 -merge_type 选项设置为 min 或 max 以指定同步操作。默认值为 max:将属性同步为各scenario中找到的最大值;对于字符串属性,为按字母数字顺序排列的最后一个字符串;对于布尔属性,为 true。
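例如,对某个用户自定义属性按最小值同步的写法大致如下(属性名 my_weight 为假设的用户自定义属性,并非原文示例):

```tcl
# 将 cell U72 的用户自定义属性 my_weight
# 在 command focus 中的所有 scenario 间同步为最小值
synchronize_attribute -class cell -merge_type min {U72} my_weight
```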
Merged Reporting
Running reporting commands in multi-scenario analysis can generate large amounts of data from each scenario, making it difficult to identify critical information. To help manage these large data sets, the tool supports merged reporting.
在multi-scenario分析中运行报告命令可能会从每个scenario中生成大量数据,从而难以识别关键信息。为了帮助管理这些大型数据集,该工具支持合并报告。
Merged reporting automatically eliminates redundant data and sorts the data in numeric order, allowing you to treat all the scenarios as a single virtual PrimeTime instance. This feature works with these commands:
合并报告会自动消除冗余数据,并按数字顺序对数据进行排序,从而允许您将所有scenario视为单个虚拟PrimeTime实例。此功能适用于以下命令:
get_timing_paths
report_analysis_coverage
report_clock_timing
report_constraint
report_min_pulse_width
report_si_bottleneck
report_timing
To get a merged timing report, issue the report_timing command at the master as you would in an ordinary single-scenario session. This is called executing the command in the master context. The master and worker processes work together to execute the report_timing command in all the scenarios in command focus, producing a final merged report displayed at the master console. Each report shows the scenario name from which it came.
要获取合并的时序报告,请像在普通的single-scenario session中一样在主服务器发出 report_timing 命令。这称为在master context中执行命令。主进程和工作进程协同工作,在command focus中的所有scenario中执行report_timing命令,生成在主控制台上显示的最终合并报告。每个报告都显示它来自的scenario名称
You can optionally specify one or more commands to run in the worker context before or after the merged reporting command. For example,
您可以选择指定一个或多个命令,以便在合并的报告命令之前或之后在worker context中运行。例如
pt_shell> remote_execute -pre_commands {source my_constraints.tcl} \
-post_commands {source my_data_proc.tcl} {report_timing}
This runs the my_constraints.tcl script before and the my_data_proc.tcl script after the report_timing command, all in the worker context.
In situations where there are more scenarios than worker processes, using these options can reduce the amount of swapping scenarios in and out of hosts because all of the scenario-specific tasks (pre-commands, reporting command, and post-commands) are performed together in a single loading of the image onto the host.
在scenario多于工作进程的情况下,使用这些选项可以减少scenario在主机间换入换出的次数,因为所有scenario-specific的任务(pre-commands、reporting command 和 post-commands)都在image加载到主机的一次过程中一起执行。
To learn about the reporting commands in DMSA, see
• get_timing_paths
• report_analysis_coverage
• report_clock_timing
• report_constraint
• report_si_bottleneck
• report_timing
get_timing_paths
Using the get_timing_paths command from the master process returns a collection of timing path objects from each scenario, based on the parameters you set in the command. The timing path collection is condensed to meet the requirements you specify (such as using -nworst or -max_paths). It returns a merged output collection of the worst timing paths across all scenarios according to the reporting criteria set.
根据您在命令中设置的参数,从主进程使用 get_timing_paths 命令将返回每个scenario的时序路径对象的集合。时序路径集合经过压缩以满足您指定的要求(例如使用 -nworst 或 -max_paths)。它根据设置的报告条件返回所有scenario中最差时序路径的合并输出集合。
To find the scenario name for a particular timing path object, query the scenario_name attribute of the timing path object. To specify the attributes to be included in the timing path collection, use the -attributes option with the get_timing_paths command. By default, only the full_name, scenario_name, and object_class attributes are included.
若要查找特定时序路径对象的scenario名称,请查询时序路径对象的 scenario_name 属性。要指定要包含在时序路径集合中的属性,请将 -attributes 选项与 get_timing_paths 命令一起使用。默认情况下,仅包含 full_name、scenario_name 和 object_class 属性。
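在 master 上的用法示意如下(-nworst 的取值为假设):

```tcl
# 跨 command focus 中的所有 scenario 返回合并后的最差 50 条路径,
# 并在路径对象上带回 slack 属性(full_name、scenario_name 默认包含)
set paths [get_timing_paths -nworst 50 -attributes {slack}]

# 查询每条路径来自哪个 scenario 及其 slack
foreach_in_collection p $paths {
    echo [get_attribute $p scenario_name] [get_attribute $p slack]
}
```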
report_analysis_coverage
The report_analysis_coverage command reports the coverage of timing checks over all active scenarios in the current session. The timing check is defined by the constrained pin, the related pin, and the type of the constraint. The timing check status is reported as:
report_analysis_coverage 命令报告当前session中所有活动scenario的时序检查覆盖率。时序检查由constrained pin、related pin和约束类型定义。时序检查状态报告为:
• Violated if the arrival time does not meet the timing requirements
• 如果到达时间不符合时序要求,则违例
• Met if the timing requirements are satisfied
• 如果满足时序要求,则满足
• Untested if the timing check was skipped
• 如果时序检查被跳过,则未测试
Timing checks are included in the Untested category only if they are untested in all scenarios. If a timing check is tested in at least one scenario, it is reported as either Violated or Met, even if untested in some scenarios. This allows meaningful reporting with significantly different behavior, such as functional versus test modes.
仅当时序检查在所有scenario中均未经过测试时,才会包含在“未测试”类别中。如果在至少一个scenario中测试了时序检查,则该检查将报告为“违例”或“满足”,即使在某些scenario中未测试也是如此。这样即可对行为显著不同的scenario(例如 functional 模式与 test 模式)进行有意义的报告。
The merged report includes the scenario names, as shown in the following example.
[图片:含scenario名称列的 report_analysis_coverage 合并报告示例]
If a timing check is untested in all scenarios in the command focus, instead of a list of scenarios, the all tag is listed in the scenario column; however, if a timing check is tested in at least one scenario, the check is reported as tested, either violated or met, because it has been successfully exercised in some scenarios. The following example shows a report where some scenarios are met and others are untested.
如果在command focus中的所有scenario中都未测试某项时序检查,则scenario列中会列出 all 标记,而不是scenario列表;但是,如果在至少一个scenario中测试了该时序检查,则该检查将报告为已测试(违例或满足),因为它已在某些scenario中成功执行。以下示例显示了一个报告,其中某些scenario满足要求,而其他scenario未经测试。
[图片:部分scenario满足、其他scenario未测试的 report_analysis_coverage 示例]
report_clock_timing
The report_clock_timing command executed by the master process returns a merged report that displays the timing attributes of clock networks in a design. The output for the merged report_clock_timing report is similar to the single-core analysis report_clock_timing report; however, the merged report contains a list of scenarios, which are displayed under the Scen column.
主进程执行的 report_clock_timing 命令将返回一个合并报告,其中显示设计中时钟网络的时序属性。合并的 report_clock_timing 报告的输出类似于单核分析report_clock_timing报告;但是,合并的报表包含scenario列表,这些scenario显示在“Scen”列下。
[图片:report_clock_timing 合并报告示例(含 Scen 列)]
report_constraint
The report_constraint command executed by the master process returns a merged report that displays constraint-based information for all scenarios in command focus. It reports the size of the worst violation and the design object that caused the violation.
主进程执行的 report_constraint 命令将返回一个合并报表,该报表显示command focus中所有scenario的基于约束的信息。它报告最严重的违例的大小和导致违例的设计对象。
The merging process for the report_constraint command for DMSA treats all the scenarios in command focus as if they are a single virtual PrimeTime instance. When an object is constrained in more than one scenario, these constraints are considered duplicates. PrimeTime reports the worst violating instance of each relevant object.
DMSA的report_constraint命令的合并过程将command focus中的所有scenario视为单个虚拟PrimeTime实例。当一个对象在多个scenario中受到约束时,这些约束被视为重复约束。PrimeTime会报告每个相关对象最严重的违例instance。
For example, for a max_capacitance report, PrimeTime reports on the most critical instance for each pin. When there are multiple instances of the same pin across multiple scenarios, PrimeTime retains the worst violator and uses the scenario name as a tie-breaker for consistent results. When you specify the -verbose or -all_violators option, PrimeTime reports the scenario name for the constraint violators.
例如,对于max_capacitance报告,PrimeTime会为每个pin报告最关键的instance。当同一pin在多个scenario中存在多个instance时,PrimeTime会保留最严重的违例者,并使用scenario名称作为tie-breaker,以获得一致的结果。当您指定 -verbose 或 -all_violators 选项时,PrimeTime会报告违反约束的scenario名称。
[图片:report_constraint 合并报告示例输出]
If you specify the report_constraint command in summary mode for DMSA when handling setup checks, the report shows the worst setup constraint for all scenarios per group. The hold information displays the sum of the worst total endpoint cost per clock group over all scenarios. Example 6 shows the summary output for the merged DMSA constraint report shown in Example 7.
[图片:合并 DMSA 约束报告的摘要模式输出(Example 6)]
The following example shows the output of an all-violators constraint report with the max_capacitance option for a multi-scenario analysis:
[图片:multi-scenario分析的 max_capacitance all-violators 报告输出(Example 7)]
report_min_pulse_width
To locate the pins with the most critical minimum pulse width violations, use the report_min_pulse_width command. The tool prunes, sorts, and reports the pins with the worst pulse width violations across all scenarios in focus, as shown in the following example.
要找到最小脉冲宽度违例最严重的pin,请使用 report_min_pulse_width 命令。该工具会对command focus中所有scenario里脉宽违例最严重的pin进行修剪、排序和报告,如以下示例所示。
[图片:report_min_pulse_width 合并报告示例]
report_si_bottleneck
To locate nets that are most critical in crosstalk delay, use the report_si_bottleneck command. With it, you need minimal net repair effort to identify and fix most problems. Across all scenarios, sorting and printing is performed with duplicates pruned to leave a unique set of nets across the scenarios.
要找到串扰延迟方面最关键的net,请使用 report_si_bottleneck 命令。有了它,您只需最少的net修复工作量即可识别并解决大多数问题。排序和打印会跨所有scenario进行,并修剪重复项,从而得到跨scenario的唯一net集合。
The following example executes the report_si_bottleneck command at the master.
[图片:在 master 上执行 report_si_bottleneck 的示例]
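截图中的命令无法复原,一个典型用法大致如下(选项取值为假设,并非原文示例):

```tcl
# 在 master 上跨 command focus 中的所有 scenario
# 报告串扰 delta delay 最严重的前 10 条 net
report_si_bottleneck -cost_type delta_delay -max_nets 10
```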
report_timing
The report_timing command executed by the master process returns a merged report that eliminates redundant data across scenarios and sorts the data in order of slack, effectively treating the scenarios in the command focus as a single analysis. The merging process allows you to treat all the scenarios in command focus as if they are a single virtual PrimeTime instance. To do this, PrimeTime reports the worst paths from all scenarios while maintaining the limits imposed by the -nworst, -max_paths, and other options of the report_timing command.
主进程执行的 report_timing 命令返回一个合并的报告,该报告消除了跨scenario的冗余数据,并按slack大小对数据进行排序,从而有效地将command focus中的scenario视为单个分析。合并过程允许您将command focus中的所有scenario视为单个虚拟PrimeTime instance。为此,PrimeTime会报告所有scenario中的最差路径,同时保持report_timing命令的 -nworst、-max_paths 和其他选项施加的限制。
When the same path is reported from multiple scenarios, PrimeTime keeps only the most critical instance of that path in the merged report and shows the scenario in which that instance of the path was the most critical. This way, the resulting report is more evenly spread across the design instead of focused on one portion of the design that is critical in all scenarios. To prevent merging of paths, use the -dont_merge_duplicates option of the report_timing command.
当从多个scenario报告同一路径时,PrimeTime在合并的报告中仅保留该路径的最严重的instance,并显示该路径实例最严重的scenario。这样,生成的报告可以更均匀地分布在整个设计中,而不是只关注在所有scenario中都是最严重的design的一小部分。要防止路径合并,请使用 report_timing 命令的 -dont_merge_duplicates 选项。
PrimeTime considers two paths from two scenarios to be instances of the same path if the two paths meet all of the following criteria:
如果两个路径满足以下所有条件,则PrimeTime会将来自两个scenario的两条路径视为同一路径的实例
• Path group
• Sequence of pins along the data portion of the path
• Transitions at every pin along the data portion of the path
• Launch clock
• Capture clock
• Constraint type
The following example shows the merged report_timing command that was issued at the master and a multi-scenario analysis run with two scenarios, func_bc and func_wc:
[图片:func_bc 与 func_wc 两个scenario的合并 report_timing 报告输出(共三张截图)]
For information about how to use the report_timing options to control the output reports, see
• Standard Option Handling
• Complex Option Handling
Standard Option Handling
All of the options of the report_timing command are available for merged reporting, with a few exceptions (see Limitations of DMSA). Provided collections are not being passed to the options, they can be specified at the master just as in a single scenario session of PrimeTime. For example:
report_timing 命令的所有选项都可用于合并报告,只有少数例外(请参阅 DMSA 的限制)。只要未将集合传递给这些选项,就可以像在PrimeTime的单scenario session中一样在主节点上指定它们。例如:
[图片:标准选项用法的 report_timing 示例]
As in the remote_execute command, to evaluate subexpressions and variables remotely, generate the options using curly braces. For example:
与 remote_execute 命令一样,要远程计算子表达式和变量,请使用大括号生成选项。例如:
report_timing -from {$all_in} -to {[all_outputs]}
In this example, the report_timing command is a merged reporting command that collates data from the worker and generates a merged report at the master. The all_in variable and the all_outputs expression are evaluated in a worker context.
在此示例中,report_timing 命令是一个合并报告命令,用于整理来自工作线程的数据并在主服务器生成合并报告。all_in 变量和all_outputs表达式在worker context中计算
To evaluate expressions and variables locally at the master, enclose the command string in quotation marks. All master evaluations must return a string. For example,
若要在主节点本地计算表达式和变量,请将命令字符串括在引号中。所有在主节点进行的求值都必须返回一个字符串。例如:
report_timing -to "$all_out"
Use the -pre_commands option so the collection is generated in the same task as the report_timing command is executed. The merged report_timing command at the master then refers to the explicit collection using the name you specified.
使用 -pre_commands 选项,以便在执行 report_timing 命令的同一任务中生成集合。然后,主节点上的合并report_timing命令使用您指定的名称来引用该显式集合。
[图片:用 -pre_commands 先生成显式集合再执行 report_timing 的示例]
The implicit collection is referred to in the merged report_timing command at the master using curly braces around the worker expression. At the worker, the implicit collection is generated and passed to the -from option of the report_timing command. For example:
在主节点的合并report_timing命令中,用大括号将工作线程表达式括起来即可引用隐式集合。在工作线程上,将生成隐式集合并将其传递给 report_timing 命令的 -from 选项。例如:
pt_shell> report_timing -from {[get_pins U1/A]}
Complex Option Handling
To allow for evaluating master and workers variables and expressions from the multi-scenario master, the following options have been altered at the multi-scenario master:
为了能够从multi-scenario主节点评估主节点和工作线程的变量与表达式,multi-scenario主节点对以下选项进行了更改:
[图片:在multi-scenario主节点上行为有变化的选项列表]
In a single scenario session of PrimeTime, these options accept a list as an argument; however, in multi-scenario merged reporting, these options accept a string as an argument. For example:
# Single scenario session of PrimeTime
report_timing -from {ffa ffb} # in this case, the argument
# to -from is a list.
# Multi-scenario merged reporting
report_timing -from {ffa ffb} # in this case, the argument
# to -from is treated as a string.
