[SPARK-22404][YARN] Provide an option to use unmanaged AM in yarn-client mode #19616
Conversation
@devaraj-kavali why is this a WIP? Are you planning to work more on it before asking for feedback?

@vanzin Thanks for looking into this. I wanted to verify some scenarios before removing the WIP; feedback is welcome anytime. I see there are now some code conflicts, so I will resolve them and remove the WIP.

ok to test

Test build #85125 has finished for PR 19616 at commit
private val isClusterMode = sparkConf.get("spark.submit.deployMode", "client") == "cluster"

private val isClientUnmanagedAMEnabled =
  sparkConf.getBoolean("spark.yarn.un-managed-am", false) && !isClusterMode
This should be a config constant. Also unmanagedAM is more in line with other config names.
Updated the config name and also added config constants.
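For reference, such a config constant in the YARN module's config.scala would look roughly like this (a sketch using the key name suggested above; a later comment asks for an `.enabled` suffix):

```scala
import org.apache.spark.internal.config.ConfigBuilder

// Sketch of a config constant for the YARN module's config.scala, using the key
// name suggested above (the key later gains an ".enabled" suffix in review).
private[spark] val YARN_UNMANAGED_AM = ConfigBuilder("spark.yarn.unmanagedAM")
  .doc("In client mode, whether to launch the Application Master service as part of the client.")
  .booleanConf
  .createWithDefault(false)
```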
// UI's environment page. This works for client mode; for cluster mode, this is handled
// by the AM.
CACHE_CONFIGS.foreach(sparkConf.remove)
if (!isClientUnmanagedAMEnabled) {
Why is this needed in the new mode?
Removing them clears the classpath entries and leads to this error in the executors:
Error: Could not find or load main class org.apache.spark.executor.CoarseGrainedExecutorBackend
I think this is happening because you're starting the AM after these are removed from the conf. Should probably juggle things around or change how these are provided to the AM, since these configs are super noisy and shouldn't really show up in the UI.
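One way to avoid that ordering problem — a hypothetical sketch, not necessarily what the PR ends up doing — is to snapshot the cache-related entries into a separate conf for the in-process AM before stripping them from the user-visible conf:

```scala
// Hypothetical sketch: copy the cache configs for the unmanaged AM before removing
// them from the conf that backs the UI's environment page.
val cachedResourcesConf = new SparkConf(false)
CACHE_CONFIGS.foreach { entry =>
  sparkConf.getOption(entry.key).foreach(cachedResourcesConf.set(entry.key, _))
}
CACHE_CONFIGS.foreach(sparkConf.remove)
```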
populateClasspath(args, hadoopConf, sparkConf, env, sparkConf.get(DRIVER_CLASS_PATH))
env("SPARK_YARN_STAGING_DIR") = stagingDirPath.toString
if (isClientUnmanagedAMEnabled) {
  System.setProperty("SPARK_YARN_STAGING_DIR", stagingDirPath.toString)
Can this be propagated some other way? Using system properties is kinda hacky, and makes it dangerous to run another Spark app later in the same JVM.
Changed it to derive the value from the Spark conf and the application ID.
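For illustration, deriving it from the conf and the application ID could mirror the existing logic in Client.scala — a sketch, assuming `appId` and `yarnConf` are in scope (a later revision passes the computed value from Client instead):

```scala
// Sketch: rebuild the staging path from the conf and the application id, mirroring
// Client.scala (assumes org.apache.hadoop.fs.{FileSystem, Path} imports and that
// buildPath/SPARK_STAGING from object Client are visible).
val appStagingBaseDir = sparkConf.get(STAGING_DIR).map(new Path(_))
  .getOrElse(FileSystem.get(yarnConf).getHomeDirectory())
val stagingDirPath = new Path(appStagingBaseDir, buildPath(SPARK_STAGING, appId.toString))
```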
private def startApplicationMasterService(report: ApplicationReport) = {
  // Add AMRMToken to establish connection between RM and AM
  val token = report.getAMRMToken
  val amRMToken: org.apache.hadoop.security.token.Token[AMRMTokenIdentifier] =
Why do you need to make this copy? Isn't the Token above enough?
report.getAMRMToken returns an org.apache.hadoop.yarn.api.records.Token instance, but currentUGI.addToken expects an org.apache.hadoop.security.token.Token instance.
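For context, the conversion is the usual way to bridge the two token types — a sketch based on the diff below:

```scala
import org.apache.hadoop.io.Text
import org.apache.hadoop.security.UserGroupInformation
import org.apache.hadoop.yarn.security.AMRMTokenIdentifier

// Sketch: copy the fields of the yarn.api.records.Token returned by report.getAMRMToken
// into an org.apache.hadoop.security.token.Token that UGI.addToken accepts.
val token = report.getAMRMToken
val amRMToken = new org.apache.hadoop.security.token.Token[AMRMTokenIdentifier](
  token.getIdentifier.array(),
  token.getPassword.array(),
  new Text(token.getKind),
  new Text(token.getService))
UserGroupInformation.getCurrentUser.addToken(amRMToken)
```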
val currentUGI = UserGroupInformation.getCurrentUser
currentUGI.addToken(amRMToken)

System.setProperty(
Same question about using system properties.
I changed it to set the value in sparkConf and use it in ApplicationMaster when getting the containerId.
 * Common application master functionality for Spark on Yarn.
 */
private[spark] class ApplicationMaster(args: ApplicationMasterArguments) extends Logging {
private[spark] class ApplicationMaster(args: ApplicationMasterArguments, sparkConf: SparkConf,
This doesn't follow Spark's convention for multi-line arguments.
This also looks a little odd now, because there are conflicting arguments. ApplicationMasterArguments is now only used in cluster mode, and everything else is expected to be provided in the other parameters. So while this is the simpler change, it's also a little ugly.
I don't really have a good suggestion right now, but it's something to think about.
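For reference, Spark's multi-line argument convention puts each parameter on its own line with a four-space indent — a sketch of the signature under discussion:

```scala
private[spark] class ApplicationMaster(
    args: ApplicationMasterArguments,
    sparkConf: SparkConf,
    yarnConf: YarnConfiguration) extends Logging {
  // ...
}
```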
I made changes to the default constructor and added another constructor. Please check and let me know if anything can be done better.
System.setProperty(
  ApplicationConstants.Environment.CONTAINER_ID.name(),
  ContainerId.newContainerId(report.getCurrentApplicationAttemptId, 1).toString)
val amArgs = new ApplicationMasterArguments(Array("--arg",
This is pretty weird, I'd make this an explicit constructor argument for the AM instead. But if I understand this correctly, this is the address the AM will be connecting back to the driver, right?
It seems like there's an opportunity for better code here, since now they'd both be running in the same process. Like in the cluster mode case, where the AM uses the same RpcEnv instance as the driver (see runDriver()).
I added another constructor that omits ApplicationMasterArguments and takes an RpcEnv, so the AM uses the same instance.
val amArgs = new ApplicationMasterArguments(Array("--arg",
  sparkConf.get("spark.driver.host") + ":" + sparkConf.get("spark.driver.port")))
// Start Application Service in a separate thread and continue with application monitoring
new Thread() {
Don't you want to keep a reference to this thread and join it at some point, to make sure it really goes away? Should it be a daemon thread instead?
Changed it to a daemon thread.
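A minimal sketch of the daemon-thread approach (the thread name matches a later revision of the diff):

```scala
// Sketch: run the in-process AM in a named daemon thread so it cannot keep the
// client JVM alive on its own.
val amService = new Thread("Unmanaged Application Master Service") {
  override def run(): Unit = {
    // start the unmanaged ApplicationMaster here
  }
}
amService.setDaemon(true)
amService.start()
```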
Test build #87464 has finished for PR 19616 at commit

Test build #87462 has finished for PR 19616 at commit

@vanzin Thanks for the review, can you take a look at the updated PR?

Unlikely I'll be able to review this very soon.

ok to test

Test build #93055 has finished for PR 19616 at commit

Test build #93144 has finished for PR 19616 at commit
vanzin left a comment
Hi @devaraj-kavali , sorry this got left behind. It needs updating and I need to think some more about the changes to ApplicationMaster.scala, but it's in the right direction.
def getContainerId: ContainerId = {
  val containerIdString = System.getenv(ApplicationConstants.Environment.CONTAINER_ID.name())
def getContainerId(sparkConf: SparkConf): ContainerId = {
  val containerIdString =
indentation
corrected the indentation
val containerIdString = System.getenv(ApplicationConstants.Environment.CONTAINER_ID.name())
def getContainerId(sparkConf: SparkConf): ContainerId = {
  val containerIdString =
    if (System.getenv(ApplicationConstants.Environment.CONTAINER_ID.name()) != null) {
better to use sparkConf.getenv.
updated to use sparkConf.getenv
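A sketch of what that looks like, including the conf fallback this revision relies on when the container-id environment variable is absent (a later revision passes the attempt id from Client instead):

```scala
// Sketch: prefer the CONTAINER_ID env var (managed AM); otherwise fall back to the
// value the client stored in the conf for the unmanaged AM (this revision's approach).
def getContainerId(sparkConf: SparkConf): ContainerId = {
  val containerIdString =
    Option(sparkConf.getenv(ApplicationConstants.Environment.CONTAINER_ID.name()))
      .getOrElse(sparkConf.get("spark.yarn.containerId"))
  ContainerId.fromString(containerIdString)
}
```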
sparkConf.set("spark.yarn.containerId",
  ContainerId.newContainerId(report.getCurrentApplicationAttemptId, 1).toString)
// Start Application Service in a separate thread and continue with application monitoring
val amService = new Thread() {
Thread name?
added thread name
// Add AMRMToken to establish connection between RM and AM
val token = report.getAMRMToken
val amRMToken: org.apache.hadoop.security.token.Token[AMRMTokenIdentifier] =
  new org.apache.hadoop.security.token.Token[AMRMTokenIdentifier](token
Keep related calls on the same line (e.g. token.getIdentifier(), new Text(blah)).
Made the change; please let me know if anything better can be done here.
    val yarnConf: YarnConfiguration)
  extends Logging {

  def this(sparkConf: SparkConf,
See above constructor for multi-line args style.
Removed this constructor as part of the refactor for the comment below.
  RpcAddress(driverHost, driverPort),
  YarnSchedulerBackend.ENDPOINT_NAME)
var driverRef: RpcEndpointRef = null
if (sparkConf.get(YARN_UNMANAGED_AM)) {
I'm not a big fan of this change. Feels like you should have a different method here called runUnmanaged that is called instead of run(), and takes an RpcEnv.
That way you don't need to keep clientRpcEnv at all since it would be local to that method, since nothing else here needs it. In fact even rpcEnv could go away and become a parameter to createAllocator...
Refactored it into a runUnmanaged method; please let me know if anything can be done better.
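A sketch of what that refactor could look like — the parameter list is assumed from the surrounding discussion (the attempt id and staging directory end up being passed in from Client):

```scala
// Sketch: a dedicated entry point for the unmanaged AM that reuses the driver's
// RpcEnv instead of going through run()/runImpl().
private[spark] def runUnmanaged(
    clientRpcEnv: RpcEnv,
    appAttemptId: ApplicationAttemptId,
    stagingDir: Path): Unit = {
  val driverRef = clientRpcEnv.setupEndpointRef(
    RpcAddress(sparkConf.get("spark.driver.host"), sparkConf.get("spark.driver.port").toInt),
    YarnSchedulerBackend.ENDPOINT_NAME)
  addAmIpFilter(Some(driverRef))
  createAllocator(driverRef, sparkConf, clientRpcEnv)
}
```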
Thanks @vanzin for taking the time to look into this; I will update it with the changes.

ok to test

Test build #100052 has finished for PR 19616 at commit

These configs are removed from sparkConf in ApplicationMaster after they are used.

Test build #100163 has finished for PR 19616 at commit

@vanzin can you check the updated changes? Thanks
}

private def runImpl(): Unit = {
private def runImpl(opBlock: => Unit): Unit = {
There are things in this method that don't look right when you think about an unmanaged AM.
e.g., overriding spark.master, spark.ui.port, etc, look wrong.
The handling of app attempts also seems wrong, since with an unmanaged AM you don't have multiple attempts. Even the shutdown hooks seem a bit out of place.
Seems to me it would be easier not to try to use this method for the unmanaged AM.
refactored this code
addAmIpFilter(Some(driverRef))
createAllocator(driverRef, sparkConf, clientRpcEnv)

// In client mode the actor will stop the reporter thread.
actor?
Removed this as part of the refactor for the comment above.
val preserveFiles = sparkConf.get(PRESERVE_STAGING_FILES)
if (!preserveFiles) {
  stagingDirPath = new Path(System.getenv("SPARK_YARN_STAGING_DIR"))
  var stagingDir = System.getenv("SPARK_YARN_STAGING_DIR")
val stagingDir = sys.props.get("...").getOrElse { ... }
Made the change to pass the stagingDir from Client.scala
stagingDirPath = new Path(System.getenv("SPARK_YARN_STAGING_DIR"))
var stagingDir = System.getenv("SPARK_YARN_STAGING_DIR")
if (stagingDir == null) {
  val appStagingBaseDir = sparkConf.get(STAGING_DIR).map { new Path(_) }
This looks similar to the logic in Client.scala. Maybe the value calculated there should be plumbed through, instead of adding this code.
Made the change to pass the stagingDir from Client.scala
if (isClientUnmanagedAMEnabled) {
  // Set Unmanaged AM to true in Application Submission Context
  appContext.setUnmanagedAM(true)
appContext.setUnmanagedAM(isClientUnmanagedAMEnabled)
Which also makes the comment unnecessary.
updated
}

if (state == YarnApplicationState.ACCEPTED && isClientUnmanagedAMEnabled
  && !amServiceStarted && report.getAMRMToken != null) {
indent one more level
updated
  ContainerId.newContainerId(report.getCurrentApplicationAttemptId, 1).toString)
// Start Application Service in a separate thread and continue with application monitoring
val amService = new Thread("Unmanaged Application Master Service") {
  override def run(): Unit = new ApplicationMaster(new ApplicationMasterArguments(Array.empty),
This is a pretty long line. Break it down.
updated it.
currentUGI.addToken(amRMToken)

sparkConf.set("spark.yarn.containerId",
  ContainerId.newContainerId(report.getCurrentApplicationAttemptId, 1).toString)
Won't this name be the same as the first executor created by the app?
I'd rather special-case getContainerId to return some baked-in string when the env variable is not set.
made the change to pass the appAttemptId from Client.scala
Test build #100328 has finished for PR 19616 at commit
| "APPMASTER", sparkConf.get(APP_CALLER_CONTEXT), | ||
| Option(appAttemptId.getApplicationId.toString), None).setCurrentContext() | ||
|
|
||
| // This shutdown hook should run *after* the SparkContext is shut down. |
This is client mode, so you can't rely on shutdown hooks. You need to explicitly stop this service when the SparkContext is shutdown.
Imagine someone just embeds sc = new SparkContext(); ...; sc.stop() in their app code, but the app itself runs for way longer than the Spark app.
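A hypothetical sketch of that wiring in Client.scala, using the `stopUnmanaged` name that comes up later in this review (the exact hook point may differ from the PR):

```scala
// Hypothetical sketch: stop the in-process AM when the client shuts down, rather
// than relying on a JVM shutdown hook.
def stop(): Unit = {
  if (appMaster != null) {
    appMaster.stopUnmanaged(unManagedAMStagingDirPath)
  }
  // ... existing client teardown ...
}
```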
  }
}

def runUnmanaged(clientRpcEnv: RpcEnv,
Multi-line args start on the next line.
  throw new SparkException("While loop is depleted! This should never happen...")
}

private def startApplicationMasterService(report: ApplicationReport) = {
: Unit =
But given you should be explicitly stopping the AM, this should probably return the AM itself.
Test build #101002 has finished for PR 19616 at commit

Test build #101003 has finished for PR 19616 at commit

Test build #100997 has finished for PR 19616 at commit

Test build #101277 has finished for PR 19616 at commit

@vanzin can you review the latest changes when you have some time? Thanks
vanzin left a comment
Some minor comments. It would be good to add a test for this in YarnClusterSuite.
// In cluster mode, do not rely on the disassociated event to exit
// This avoids potentially reporting incorrect exit codes if the driver fails
if (!isClusterMode) {
if (!(isClusterMode || sparkConf.get(YARN_UNMANAGED_AM))) {
Update comment above?
private val isClusterMode = sparkConf.get("spark.submit.deployMode", "client") == "cluster"

private val isClientUnmanagedAMEnabled = sparkConf.get(YARN_UNMANAGED_AM) && !isClusterMode
private var amServiceStarted = false
Do you need this extra flag? Could you just check if appMaster != null?
private val isClientUnmanagedAMEnabled = sparkConf.get(YARN_UNMANAGED_AM) && !isClusterMode
private var amServiceStarted = false
private var appMaster: ApplicationMaster = _
private var unManagedAMStagingDirPath: Path = _
Seems better to just store this in a variable for all cases. It's recomputed from the conf in a few different places in this class.
/* Unmanaged AM configuration. */

private[spark] val YARN_UNMANAGED_AM = ConfigBuilder("spark.yarn.unmanagedAM")
Add .enabled to the config key.
Test build #101610 has finished for PR 19616 at commit

@vanzin can you check the updated changes? Thanks
vanzin left a comment
What's the behavior when the AM fails before the context is stopped?
From the code I see some stuff is printed to the logs and the YARN app is marked as finished. But does the context remain alive? Or should that event cause the context to be stopped?
I'm mostly concerned with how clear it is for the user that the jobs start failing because the context is now unusable.
finish(FinalApplicationStatus.FAILED,
  ApplicationMaster.EXIT_UNCAUGHT_EXCEPTION,
  "Uncaught exception: " + StringUtils.stringifyException(e))
if (!unregistered) {
Is this code needed here? Won't it be called when the client calls stopUnmanaged?
appMaster.runUnmanaged runs in a daemon thread. If something unexpected happens in appMaster.runUnmanaged, the daemon thread stops, but the monitor thread will not know about it and continues reporting the status as ACCEPTED/RUNNING. This code unregisters with the RM so that the Client/monitor thread sees the application report status as FAILED and stops the services, including the SparkContext.
}

test("run Spark in yarn-client mode with unmanaged am") {
  testBasicYarnApp(true, Map("spark.yarn.unmanagedAM.enabled" -> "true"))
YARN_UNMANAGED_AM.key
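i.e., roughly (a sketch applying the suggestion):

```scala
test("run Spark in yarn-client mode with unmanaged am") {
  testBasicYarnApp(true, Map(YARN_UNMANAGED_AM.key -> "true"))
}
```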
If the AM fails before the context is stopped, the AM reports the FAILED status to the RM, and the Client receives the FAILED status as part of monitoring and stops the services, including the context.
Test build #101658 has finished for PR 19616 at commit
That explains the YARN side of things. What about the Spark side of things? What will the user see? Is it clear to the user that the context is now unusable because the YARN app is not running anymore? Or will they get weird errors because of executors not being allocated and things like that?
It gives an error log from YarnClientSchedulerBackend.
Sounds good. Merging to master.

Thank you so much @vanzin.
// Add log urls
container.foreach { c =>
  sys.env.get("SPARK_USER").foreach { user =>
  sys.env.filterKeys(_.endsWith("USER")).foreach { user =>
While resolving the merge conflict in #23260, I found two observations here:
1. What output do we expect when `env` has more than one matching key? This looks like it always takes the last matching key, which is indeterministic if more than one key matches.
2. Here `user` is not a value but a `(key, value)` pair, so we need to use either the key or the value (I guess we would like to pick the value).
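To illustrate the second observation (a sketch only; it does not address the first observation, and the actual fix is in #23659):

```scala
// Sketch: filterKeys returns a Map, so foreach yields (key, value) tuples; destructure
// to get the value. Multiple matching keys would still be picked nondeterministically.
sys.env.filterKeys(_.endsWith("USER")).foreach { case (_, userName) =>
  // build the log urls with `userName` (a String), not the whole tuple
}
```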
Thanks @HeartSaVioR for finding and reporting it. It was my mistake, and I am sorry for that. I have created PR #23659 to fix the issue.
[SPARK-22404][YARN] Provide an option to use unmanaged AM in yarn-client mode
Closes apache#19616 from devaraj-kavali/SPARK-22404.
Authored-by: Devaraj K <[email protected]>
Signed-off-by: Marcelo Vanzin <[email protected]>
What changes were proposed in this pull request?
Provides a new configuration "spark.yarn.un-managed-am" (defaults to false) to enable the unmanaged AM application in YARN client mode, which launches the Application Master service as part of the Client. It utilizes the existing code for communication between the Application Master and the Task Scheduler for container requests/allocations/launches, and eliminates:
1. Allocating and launching the Application Master container
2. Remote node/process communication between the Application Master and the Task Scheduler

How was this patch tested?
I verified this manually by running applications in yarn-client mode with "spark.yarn.un-managed-am" enabled, and also ensured that there is no impact to the existing execution flows.
I would like to hear others' feedback/thoughts on this.
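For completeness, a minimal client-mode usage sketch, assuming the final key name from the review (spark.yarn.unmanagedAM.enabled):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch: run the AM inside this client process instead of allocating an AM container.
val conf = new SparkConf()
  .setAppName("unmanaged-am-example")
  .setMaster("yarn")
  .set("spark.submit.deployMode", "client")
  .set("spark.yarn.unmanagedAM.enabled", "true")  // key name after review feedback
val sc = new SparkContext(conf)
sc.parallelize(1 to 100).count()
sc.stop()
```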