Modified items
All recently modified items, latest first.
Re: Build System Requirements
by Timothy McPhillips, last updated: Oct 20, 2008 05:39 PM
Here's my perspective on the idea of each actor being represented as its own module. First, I think it will be rare to develop a single actor in isolation from other code. Whenever I've seen new workflow support developed for a community, a large number of new actors had to be created that were meant to work together (the ppod preview release included over 35 new actors, and it only *scratched* the surface of what systematists actually need). It would be painful to have each of those actors isolated in its own deep directory tree. The Java packaging mechanism is a nice way to make a first-order association between actors and to develop them in concert. I'd like all those actors to be close to each other in the same directory tree when I'm working on them (and close to the support classes they all depend on).
Second, I'm not sure we really need to continue the current approach of manually creating a directory, a MoML file, and a manifest file for each actor under src/actor (resources/actors in the new module approach) to serve as sources for KAR files at all. KAR files are a nice way to package actors for sharing and loading into Kepler, but I'd like to see the creation of KAR files via manual editing of text files completely eliminated. Can't we create the KAR files somehow without making these intermediate files by hand (and without individually exporting them from Kepler)? I realize the KAR source files are also used to apply semantic annotations to actors and to control where they show up in the Kepler actor library. But is this ultimately the best place to put such annotations? If folks add annotations (or documentation) to an actor later through some other mechanism (through Kepler, through the Kepler actor repository, etc.), those additions won't be reflected in the KAR sources or in future KAR files built from them. So maybe the actor repository is the place to store annotations, not the KAR 'sources'.
Whatever we do, I think we should shoot for automating all of this so that most folks, even actor developers, don't need to know what goes into making a KAR file at all. My personal preference as an engineer would be the following: when I run Kepler (both in production and during development), all actors (say, all subclasses of ptolemy.actor.TypedAtomicActor) would show up in the Library pane, organized by Java package (the Java packages in which actors are located would form the default "ontology" for the actors). The actors would also show up elsewhere in the Library if semantically annotated (so that they can be grouped in ways orthogonal to the Java packaging scheme), but at least they'd be immediately available to search for and drag and drop onto the canvas without any extra work at all. This approach would eliminate an enormous amount of overhead (overhead that I currently experience myself). I like the suggestion that the identification and versioning of actors be automatic as well. Atomic actors are already uniquely defined by their fully qualified class name and their version in whatever source code repository they are stored in. It will be very nice when I no longer have to assign (or even understand) the unique ID assigned to actors or the syntax for describing versions in that ID string!
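A minimal sketch of the "Java package as default ontology" idea described above. The class and method names here are hypothetical, invented for illustration; Kepler does not actually contain them:

```java
// Hypothetical helper: derive an actor's default library-tree
// category from its fully qualified class name, so actors appear
// in the Library pane organized by Java package with no manual
// KAR metadata at all.
public class DefaultOntology {

    // "org.kepler.actor.io.FileReader" -> "org/kepler/actor/io"
    public static String categoryFor(String fqcn) {
        int lastDot = fqcn.lastIndexOf('.');
        if (lastDot < 0) {
            return ""; // default package: root of the library tree
        }
        return fqcn.substring(0, lastDot).replace('.', '/');
    }

    // "org.kepler.actor.io.FileReader" -> "FileReader"
    public static String displayNameFor(String fqcn) {
        return fqcn.substring(fqcn.lastIndexOf('.') + 1);
    }

    public static void main(String[] args) {
        String fqcn = "org.kepler.actor.io.FileReader";
        System.out.println(categoryFor(fqcn) + " / " + displayNameFor(fqcn));
    }
}
```

Semantic annotations would then only add extra library locations on top of this package-derived default, rather than being required for an actor to appear at all.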
Re: Build System Requirements
by Chad Berkley, last updated: Oct 20, 2008 11:03 AM
I actually don't think it's as much overhead as you think. Besides the trunk/branches/tags directories, I added very little that wasn't already in the current directory-based structure. The versioning of actors has been an issue with Kepler forever, so I think this goes a long way toward dealing with that. Currently, the actors are each in their own directory, which is similar to the structure that I proposed. They will still have to have the same amount of metadata, but it can be structured more appropriately (i.e., actors can keep their own resources with them in a standard directory structure instead of relying on Kepler to load resources). I think this will actually make it easier for people to develop actors, because an actor can be treated as a development project separate from Kepler itself, which is currently not possible. I don't think we're requiring actor development in separate modules. It would be highly recommended, but if someone sees a need to create a module with multiple actors, I don't think that would be a problem. I think one way to help this process along for developers is to create the actor development kit that we've talked about several times: a simple tool, probably built into the build system, that would create the directory structure and template metadata files needed for a new actor so that you don't have to do it manually.
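The "actor development kit" idea above could be sketched as a small generator. Everything here is illustrative: the directory layout, file names, and file contents are assumptions, not the real Kepler actor layout:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical sketch of an actor development kit: given an actor
// name, lay down a directory skeleton and stub metadata files so
// the developer never writes them by hand.
public class ActorTemplate {

    public static Path create(Path root, String actorName) throws IOException {
        Path actorDir = root.resolve(actorName);
        Files.createDirectories(actorDir.resolve("src"));
        Files.createDirectories(actorDir.resolve("resources"));
        // Stub MoML and manifest files that would otherwise be
        // created and edited manually (contents are placeholders).
        Files.write(actorDir.resolve("resources").resolve(actorName + ".moml"),
                ("<entity name=\"" + actorName + "\"/>\n").getBytes(StandardCharsets.UTF_8));
        Files.write(actorDir.resolve("MANIFEST.MF"),
                ("Manifest-Version: 1.0\nActor-Name: " + actorName + "\n")
                        .getBytes(StandardCharsets.UTF_8));
        return actorDir;
    }

    public static void main(String[] args) throws IOException {
        Path dir = create(Paths.get("build", "adk-demo"), "MyActor");
        System.out.println("Created " + dir);
    }
}
```

In practice this would likely be an Ant target in the build system rather than a standalone class, but the effect is the same: the template metadata exists before the developer ever opens an editor.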
Re: Build System Requirements
by David Welker, last updated: Oct 17, 2008 01:40 PM
Thanks for your work! It looks good. I basically agree with most of what is in your ideas document, which is important. So, although I am focusing on disagreement here, hopefully you do not get the impression that I disagree with you on a majority of issues. The first point I would like to bring up is to question whether we really want to support includes/excludes of source code from modules. We did this with Ptolemy because it was absolutely necessary, given the monolithic design of Ptolemy and the fact that we simply did not need large portions of it. Without includes/excludes, the size of Ptolemy would be excessive. In contrast, this may not be necessary with future modules, which should all be rationally limited in size. On the other hand, I can see some advantages right away. For example, if I do not want to use the same version of a jar that is being used by a module, and I do not want to use a separate class loader either, it might be nice to exclude that jar and use my own jar in its place. But is includes/excludes really the best mechanism for this? The build system is not designed to copy source code around, so it is not clear exactly how you expect includes and excludes to come into play. I suppose the filesets of Java source that are compiled, and the classpath, could make use of them... Is that what you intend? I am not categorically opposed to this idea, but for now I would prefer to expect modules that need this build functionality to implement it on their own, since I suspect that few if any modules would actually need it at this point. If it later turns out that this is a desired feature of many modules, then it might be worth investing the resources at that point to implement it.
Second, with respect to the idea that each actor is its own module: one cost of that idea is that it may make the amount of metadata each master module has to maintain excessive. Will master modules have to list all of these modules in modules.txt and put their locations in module-locations.txt? Isn't that a lot of data, with a lot of room for error and maintenance headaches? Isn't it also extra work for developers to have to separate their actor code from their other code, even when the actor code is not very significant? What if I have a lot of actors that I do not expect to be useful outside of the module that I am developing and that uses them? Does it really make sense to store those actors elsewhere? I think it is precisely this concern that motivated you to suggest the concept of actor groups. An actor group can specify a set of actors, including, presumably, their relative priorities and their locations, and a module that needs them can refer to the actor group instead of the individual actors. This would certainly go some way toward decreasing the amount of additional metadata this idea would require us to maintain. At the same time, there would still be increased overhead involved in developing actors under this proposal. For each actor I develop, no matter how trivial, I have to specify a unique location in the repository. I have to create that location, along with appropriate tags. I have to update metadata either in the modules that use my actor or in an actor group that manages metadata for a group of related actors. Clearly, to the extent that people are developing many trivial actors (for example, actors that extend existing actors in small ways needed by a particular domain), this overhead would be a significant part of the work needed to develop the actor.
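To make the metadata burden concrete, here is what the per-actor bookkeeping might look like if every actor were its own module. The module names and the exact file syntax below are made up for illustration (only the repository URL base is real); the point is that each new actor adds a line to both files:

```
# modules.txt -- ordered list of modules in this distribution
ppod
comad
actor-my-filter

# module-locations.txt -- where the build fetches each module from
ppod             https://code.kepler-project.org/code/kepler/modules/ppod/trunk
comad            https://code.kepler-project.org/code/kepler/modules/comad/trunk
actor-my-filter  https://code.kepler-project.org/code/kepler/modules/actor-my-filter/trunk
```

With dozens of trivial actors, both files grow line-for-line with the actor count, which is exactly the maintenance headache raised here; an actor group would collapse many such lines into one reference.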
In contrast, to the extent that developers are creating sophisticated actors, these costs will seem relatively minor compared to the work to be done. Basically, our view of the desirability of this proposal may hang to some extent on what sorts of actors we expect to be developed. We should keep in mind that this proposal, to the extent that it increases the overhead involved in creating an actor, will give developers an incentive to develop more sophisticated actors that do more, rather than dividing work among many simple actors. These more sophisticated actors may be less desirable to share, because they are more likely to perform domain-specific work. In contrast, if I have many somewhat simpler actors that together perform the same functionality, some other project might find one or more of them useful while disregarding the more domain-specific instances. On the other hand, one could imagine that more sophisticated actors would be more desirable in some contexts, in that you take one actor and it does a lot for you. There might also be some advantage in the flexibility of using separately versioned actors (and some increased complexity as well). Anyway, my point isn't that more sophisticated actors are good or bad, but rather that to the extent you increase the overhead of creating an actor, as you do when you require actors to be stored in their own modules, you create an incentive (which will be overcome in some instances) to create fewer, more complex actors. I am not sure whether we should prefer more complex or simpler actors, but I do think it is an unfortunate side effect of the proposal that it would increase the relative cost of creating one type versus the other... It should be pointed out that if developers want to have only one actor in a module, nothing in the current build system would stop them.
So, what we are proposing here isn't the option to create actors in their own modules; it is the requirement to do so. To the extent that benefits arise from separate versioning of individual actors or suites of related actors, developers are already capable of realizing those benefits by choosing to group actors into modules appropriately. So the question arises: what benefits exactly arise from this proposal? It seems to me that there are costs. Developers who don't really care would have to spend extra work creating separate modules along with the associated metadata. Developers would have to update and debug problems that arise when this metadata is incorrectly specified or when updates to the code make it obsolete. These are not insignificant costs. I can imagine one benefit, and it has already been mentioned: perhaps the LSID could be automatically generated from the repository location and version of the actor, which is guaranteed to be unique. First, I am not sure whether this is the best way to generate LSIDs; there may be alternatives that do not impose these costs on developers. Second, I am not sure that we cannot get this benefit even without the requirement that individual actors be stored in their own modules. After all, each actor still has a unique location even when it is not in its own module (that unique location consists of the module it is stored in plus the actor name). Each actor also has a unique SVN revision associated with the last change to that actor. So I am not sure that we could not achieve this benefit without requirements regarding the storage of actors. In another post, I would like to address your proposed repository structure...
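The alternative raised above, deriving an identifier from data the repository already guarantees to be unique (containing module, actor class name, and the SVN revision of the last change), could look something like this. The authority string and URN layout are invented for illustration; this is not the real Kepler LSID format:

```java
// Hypothetical sketch: build a unique actor identifier from the
// module name, the actor's class name, and the SVN revision of its
// last change -- without requiring each actor to be its own module.
public class ActorLsid {

    public static String lsidFor(String module, String className, long svnRevision) {
        // module + class name gives a unique location; the SVN
        // revision gives a unique version for that location.
        return "urn:lsid:kepler-project.org:actor:"
                + module + "." + className + ":" + svnRevision;
    }

    public static void main(String[] args) {
        System.out.println(lsidFor("comad", "CollectionReader", 14203));
    }
}
```

Since both inputs come straight out of SVN, the developer never assigns or even sees the ID, which is the automation Tim asked for, with or without one-actor-per-module.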
Re: Build System Requirements
by Chad Berkley, last updated: Oct 16, 2008 08:36 PM
I put together a document that is still a work in progress, but I think it outlines a lot of the "how" or at least hints at it. I also outlined what I think is a good layout for the repository. I tried to note where I thought items met specific requirements in the requirements document. Could everyone please take a look and give me some feedback? https://dev.kepler-project.org/developers/teams/build/systems/build-system/build-system-ideas Post any ideas or changes you have in mind back here for consideration by the group. thanks, chad
Build System Requirements
by Chad Berkley, last updated: Oct 13, 2008 04:46 PM
Bugzilla
by Timothy McPhillips, last updated: Oct 09, 2008 10:59 AM
I'm wondering how we should be using bugzilla in light of our new project organizational structure. Do we want to add more "Components" to choose from in bugzilla reflecting, say, the products of the various teams? There is already a "build system" component, which is great. What about a "framework" component, representing bugs (and feature requests) expected to be most relevant to the Framework Team? And what about bugs reported for various extension modules? I'm currently working on making the comad implementation of the First Provenance Challenge available to the community as a new module. As I'm doing this I'm running into things I'd like to change both in that module and in the comad module (some are "bugs", others are "improvements"). How should such "bugs" be categorized in bugzilla?
Re: The new build system and NMI
by David Welker, last updated: Oct 08, 2008 04:54 PM
Sounds great. I will definitely be in contact with Chad on these issues soon.
Re: The new build system and NMI
by Matthew Jones, last updated: Oct 08, 2008 02:48 PM
David, I saw your IRC note asking where documentation on NMI is. There is extensive NMI documentation at http://nmi.cs.wisc.edu. Our current Kepler NMI build is located in the kepler/trunk/build-nmi subdirectory, and mostly consists of the needed configuration files. NMI is simple on the surface, but in the process of developing our build I had to work around a lot of NMI bugs (and we're still experiencing some). So I would highly recommend that you work with Chad and me to implement this, rather than doing it independently. Chad has been working with NMI for the last several months trying to work out the subtle problems with the current build, and I think he would be well-suited to help transition the proposed extension build to use this system as well. Chad has the user info to log into NMI for Kepler as well.
The new build system and NMI
by David Welker, last updated: Oct 08, 2008 01:24 PM
I have discussed it with Timothy, and we are in agreement that now would be a good time to explore adding NMI support to the new build system. Briefly, here is the tentative vision: for each master module (including vanilla-trunk) there should be the possibility of uploading the appropriate modules to NMI and running a test suite appropriate for that particular configuration of modules. I think there are several issues that need to be addressed. First, and most basically, what is the best way to get the modules onto the NMI machines? Second, how will the appropriate modules be selected for transport? Clearly, whatever approach is used will have to make use of the module-locations.txt file associated with a particular distribution. Third, how will people designate that they want to receive reports for the NMI build of a particular master module? Clearly, not everyone is going to be interested in the NMI reports for all the master modules that are tested, but they may be interested in a subset. For example, someone working in phylogenetics might be interested in the NMI reports associated with the ppod master module, but not necessarily in the results for a master module being developed by a graduate student whose research they are not familiar with and whom they do not know. Fourth, not everyone will be interested in NMI reports even for the master module they are working on. How shall we designate that some master modules will be involved but others will not? Fifth, it will be appropriate to run only a subset of the various test suites that have been developed for a particular master module, and it is also possible that different master modules will want to share test suites that exercise common functionality. How shall the test suites to be used for a particular master module be specified? How should sharing of test suites between master modules work?
If anyone has any thoughts on these questions, I would love to hear them. Basically, this is the subject that I am exploring right now. I hope to have something implemented soon.
Re: Build System Requirements
by Matthew Jones, last updated: Oct 07, 2008 04:34 PM
Chad started, and I updated, a new requirements and design goals document for the build system. I think it's still not complete, but it's getting closer. https://dev.kepler-project.org/developers/teams/build/systems/build-system/build-system-requirements
Build System
by Chad Berkley, last updated: Oct 06, 2008 04:31 PM
The Kepler build system
Relationship between Web UI and Kepler Web Service
by Jianwu Wang, last updated: Oct 06, 2008 02:02 PM
We implemented a Kepler Web service which may be useful for the WebUI group. You can access the Web service at https://code.kepler-project.org/code/kepler/modules/webservice/trunk. The Web service includes two sets of operations.
The first set of operations executes Kepler workflows and returns results from the outputs of the operations. It is suitable for short-term execution: executeByAttach, executeByAttachWithPara, executeByURI, executeByURIWithPara, executeByContent, executeByContentWithPara.
The second set of operations first starts a workflow execution, then lets you monitor its status, steer its execution, and finally retrieve the results. It is suitable for long-term execution: startExeByURI, startExeByURIWithPara, startExeByAttach, startExeByAttachWithPara, startExeByContent, startExeByContentWithPara, getResults, getResultsByAttach, getExecutionStatus, resumeExecution, stopExecution, pauseExecution.
These operations are similar to the messages to the "job manager" in the document "Requirements for Kepler web client support", so I think the Web service will be useful for the WebUI group. One thing I don't get is the 'input/output data manager'. Is it used to prepare input data and distribute output data, or just to specify the source of the input data and the target of the output data? The Web service can be invoked with parameter configuration, which means the data source and target can be specified that way.
typo in devel forum URLs
by Daniel Crawl, last updated: Oct 06, 2008 01:22 PM
Screenshots for Instructions
by Aaron Schultz, last updated: Oct 01, 2008 08:17 PM
A place to upload screenshots that are used in instruction pages.
Re: Build System Requirements
by David Welker, last updated: Oct 01, 2008 03:36 PM
I agree with Matt 100% on this requirement. We need to redesign the cache so that different distributions use different caches; that way you can switch between distributions without error.
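One way the per-distribution cache could work is to key the cache directory on the distribution name. The layout below (a cache subdirectory under .kepler, one child per distribution) is an assumption for illustration, not the actual Kepler cache design:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical sketch: instead of one shared ~/.kepler cache that
// must be deleted when switching distributions, give each
// distribution its own cache subdirectory.
public class DistributionCache {

    public static Path cacheDirFor(String userHome, String distribution) {
        return Paths.get(userHome, ".kepler", "cache", distribution);
    }

    public static void main(String[] args) {
        String home = System.getProperty("user.home");
        // Switching from vanilla-1.0 to ppod touches neither cache.
        System.out.println(cacheDirFor(home, "vanilla-1.0"));
        System.out.println(cacheDirFor(home, "ppod"));
    }
}
```

With this scheme, `ant change-to -Dmodule=...` would simply point Kepler at a different cache directory, and imported items in one distribution's cache survive a switch to another.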
Re: Build System Requirements
by Matthew Jones, last updated: Sep 30, 2008 06:11 PM
Glad to hear it worked. From your note, I'd like to derive a new requirement for the build and runtime system: Requirement: Kepler should not need or promote the deletion of the .kepler directory in order to install and switch between actors, extensions, or different versions of those, as this will potentially delete items that the user chose to import into their system cache.
Re: Build System Requirements
by Derik Barseghian, last updated: Sep 30, 2008 02:38 PM
Thanks, this worked for me, with the caveat of using: ant change-to -Dmodule=vanilla-1.0 (as noted on the instruction page, instead of just "vanilla"). I also deleted .kepler after this command so that the ppod menu items go away and the regular ones return (see step 9).
Design Survey: Pod 4
by Kirsten Menger-Anderson, last updated: Sep 29, 2008 03:08 PM