
Build System Requirements


Build System Requirements

Posted by Chad Berkley at September 16. 2008

Hi,

I'd like to start a thread to identify and discuss the major requirements for the build system as we move forward.  I know we've been discussing this a lot and we have a bunch of different documents spread out with requirements, but I thought it would be useful to get all of this into one place where we can talk about it.  The current documentation we have for requirements and use cases is here: http://www.kepler-project.org/Wiki.jsp?page=ProposedBuildSystemUseCases and here: http://www.kepler-project.org/Wiki.jsp?page=CurrentBuildSystemUseCases

We need to decide soon exactly which tools we will be using, and we need to be able to verify that those tools meet our requirements.  When we're satisfied that we've identified everything, I'll put this into a more formal document and link it here.  I'll start off here with a few items I know we've been discussing for some time.

  • Scripted build is a must.  The build must not rely upon any GUI-based build tool that cannot be run from the command line.
  • The build must allow for easy extension integration.
  • The build system must support the tools required by and decided upon by the framework team.
  • Must be able to build installers from the build system.
  • Must be able to quickly build the core and extensions for iterative development.
 
That's just a few to start.  I know that David has been working on a build system for some time.  I'd like to see it work and be able to evaluate it.  Maybe we could do a conf. call sometime this week and David could demo this system to the group.  David, is your work checked in somewhere where I could try it?  Do you have any documentation on it that you could post here so I could more easily evaluate it?
 
chad
 

Re: Build System Requirements

Posted by David Welker at September 16. 2008

Hi Chad,

Right now, I am finishing up NetBeans support for the build. After that point, I would like to begin the non-trivial task of documenting it. The build is in fact checked in at

https://code.kepler-project.org/code/kepler/kepler.build/branches/1.0

Please note, you need version 1.7.1 of Ant to run the build system. Anyway, taking a quick crack at it to run the most basic version of Kepler is not too hard. Later, I will produce more complete documentation. Here is a quick and dirty set of steps you can use to try it out.

(1) Check out the code above.

(2) Make sure you have Ant 1.7.1 installed.

(3) Navigate to the build-area/ folder.

(4) Type ant get. The build downloads Kepler, Ptolemy, and a required extension known as loader. This should take roughly 6.5 minutes.

(5) Type ant run. Vanilla Kepler runs. (The commands for steps 1 through 5 are collected below.)
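Here are steps 1 through 5 as one terminal session. The svn checkout command and local directory names are just what I assume a typical checkout looks like; adjust the paths to your own setup.

<code>
# step 1: check out the build area (checkout creates a local directory named 1.0)
svn checkout https://code.kepler-project.org/code/kepler/kepler.build/branches/1.0

# step 2: confirm that Ant 1.7.1 is the version on your PATH
ant -version

# steps 3-5: download Kepler, Ptolemy, and the loader extension, then run vanilla Kepler
cd 1.0/build-area
ant get
ant run
</code>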

----------

Please note, there is a bug having to do with the cache that may frustrate this simple set of instructions. The bug did not manifest itself until somewhat recently because I have been using the ppod extension, and it does not seem to manifest itself in that context.

Anyway, whether or not step (5) works, you can try the following.

(6) Type ant change-to -Dmod=ppod. This downloads the additional modules needed by ppod and changes the configuration information.

(7) Type ant run. Now, the ppod distribution of Kepler should run.

(8) Type ant change-to -Dmod=vanilla. This should change the configuration information back to run vanilla Kepler.

(9) Type ant run. Plain Kepler should run, with one caveat: since it is using the cache generated by ppod, it will look a little different. One thing we need to do is restructure that cache so that you can seamlessly change back and forth between extensions. (Steps 6 through 9 are collected below.)
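And here are steps 6 through 9 gathered into one listing, exactly as described above:

<code>
# step 6: download the additional modules needed by ppod and switch the configuration
ant change-to -Dmod=ppod

# step 7: run the ppod distribution of Kepler
ant run

# steps 8-9: switch the configuration back and run plain Kepler
# (note the caveat above about the cache generated by ppod)
ant change-to -Dmod=vanilla
ant run
</code>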


Anyway, after adding support for NetBeans and working out a few bugs, I will produce some real documentation.

 

 

Re: Build System Requirements

Posted by Timothy McPhillips at September 18. 2008

Hi Chad,  integrating all of the build system use cases and requirements that are floating around and putting them in one document sounds great.  It will help us be more articulate about what we're trying to achieve and what constraints we face.  I like how the Extension Framework requirements document (especially the non-functional requirements section) is turning out.

Do you want to start the requirements document for the build system?  I'm also happy to do so.

By the way, you may have noticed I started an overview document providing background on the ongoing work to support development of extensions.  This document could be mined for more formal requirements as well.

Tim

Re: Build System Requirements

Posted by Derik Barseghian at September 19. 2008

Hi David,

I gave this a try on OS X 10.5.5 with my default Ant 1.7.0, and then started the procedure over completely using 1.7.1, but received this failure both times:

 

<code>
nceas-macbook05:build-area derik$ ~/code/ant_1.7.1/apache-ant-1.7.1/bin/ant run
Buildfile: build.xml

compile:
  [compile] Compiling A...

BUILD FAILED
/Users/derik/dev3/1.0/build-area/build.xml:15: srcdir "/Users/derik/dev3/1.0/A/src/main/java" does not exist!

Total time: 0 seconds
</code>
 
Derik

 


Re: Build System Requirements

Posted by David Welker at September 24. 2008

Hi Derik,

Sorry for the late reply; I just noticed this request. The problem you are experiencing is that I accidentally checked in an incorrect version of modules.txt that I was using for testing purposes. I have since fixed that, so the issue should be resolved if you update your build and try again.

Re: Build System Requirements

Posted by Derik Barseghian at September 26. 2008

Hey David,

This works through step 5 for me now, but after step 7 regular Kepler launches for me, instead of the ppod version.

Derik

Re: Build System Requirements

Posted by David Welker at September 29. 2008

Hi Derik,

Sorry for not getting back to you sooner. I was out on Thursday and Friday.

Well, I have reproduced this error. It turns out that the problem is that the required variable to be set is no longer called mod, but is instead called module.

So, if you use ant change-to -Dmodule=ppod instead of ant change-to -Dmod=ppod, it should work.

Also, if you are interested, you can update your build: Eclipse support has now been improved, and if you fail to specify -Dmodule in commands where it is now required (i.e., get, change-to, and clean), a helpful error message is printed. If you type ant change-to -Dmod=ppod now, the build will tell you that this is wrong.

mod is fewer letters to type than module, but I think module is much clearer.
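In other words, the targets that operate on a specific module now all take -Dmodule. For example (ppod here is just an example module name):

<code>
# correct usage after the rename from -Dmod to -Dmodule
ant get       -Dmodule=ppod
ant change-to -Dmodule=ppod
ant clean     -Dmodule=ppod

# the old property name now produces a helpful error message
ant change-to -Dmod=ppod
</code>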

Finally, you might want to keep track of the following documentation that I am building up. Supporting users like you on the build is now my highest priority, but I am also working on documentation and bug fixes whenever I get the chance. That means this documentation will be in a state of evolution, and there may be changes as we go along.


Anyway, you can look at that documentation at the following link.

Re: Build System Requirements

Posted by Derik Barseghian at September 30. 2008

Thanks, this worked for me, with the caveats of using:

ant change-to -Dmodule=vanilla-1.0 (as noted on the instruction page, instead of just "vanilla")

And I deleted .kepler after this command so the ppod menu items go away and the regular ones return (see step 9).
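For the record, the full sequence that worked for me looked roughly like this (assuming, as on my machine, that the cache is the .kepler directory in your home directory):

<code>
# switch back to the vanilla configuration (note the version suffix)
ant change-to -Dmodule=vanilla-1.0

# delete the cache so the ppod menu items go away (assumes the cache lives in ~/.kepler)
rm -rf ~/.kepler

# plain Kepler now runs with the regular menus
ant run
</code>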

Re: Build System Requirements

Posted by Matthew Jones at September 30. 2008

Glad to hear it worked.  From your note, I'd like to derive a new requirement for the build and runtime system:

Requirement: Kepler should not need or promote the deletion of the .kepler directory in order to install and switch between actors, extensions, or different versions of those, as this will potentially delete items that the user chose to import into their system cache.

Re: Build System Requirements

Posted by David Welker at October 01. 2008

I agree with Matt 100% on this requirement. We need to redesign the cache so that different distributions use different caches; that way, you can switch between distributions without error.

Re: Build System Requirements

Posted by Matthew Jones at October 07. 2008

Chad started, and I updated, a new requirements and design goals document for the build system.  I think it's still not complete, but it's getting closer.

 

https://dev.kepler-project.org/developers/teams/build/systems/build-system/build-system-requirements

Re: Build System Requirements

Posted by Chad Berkley at October 16. 2008

I put together a document that is still a work in progress, but I think it outlines a lot of the "how" or at least hints at it.  I also outlined what I think is a good layout for the repository.  I tried to note where I thought items met specific requirements in the requirements document.  Could everyone please take a look and give me some feedback?  

https://dev.kepler-project.org/developers/teams/build/systems/build-system/build-system-ideas

Post any ideas or changes you have in mind back here for consideration by the group.  

thanks,

chad

 

Re: Build System Requirements

Posted by David Welker at October 17. 2008

Thanks for your work! It looks good.

I basically agree with most of what is in your ideas document. That is important. So, although I am focusing on disagreement here, hopefully you do not get the impression that I do not agree with you on a majority of issues.

The first point I would like to bring up is to question whether we really want to support includes/excludes of source code from modules. We did this with Ptolemy because it was absolutely necessary, given the monolithic design of Ptolemy and the fact that we simply did not need large portions of it. Without includes/excludes, the size of Ptolemy would be excessive. In contrast, this may not be necessary with future modules, which should all be rationally limited in size. On the other hand, I can see some advantages right away. For example, if I do not want to use the same version of a jar that is being used by a module and I do not want to use a separate class loader either, it might be nice to exclude that jar and just use my own jar in its place. But is includes/excludes really the best mechanism to do this? The build system is not designed to copy source code around, so it is not clear exactly how you expect includes and excludes to come into play. I suppose that the filesets of Java source that are compiled, and the classpath, could make use of them... Is that what you intend?

I am not categorically opposed to this idea, but I certainly would prefer to expect modules that need this build functionality to implement it on their own for now, since I suspect that few if any modules would actually need to use this at this point. On the other hand, if it later turns out that this is actually a desired feature of many modules, then I think it might be worth investing the resources at that point to implement this feature.

----

Second, with respect to the idea that each actor is its own module, one cost associated with that idea is that it may make the amount of metadata that each master module has to maintain excessive. Will master modules have to list all of these modules in modules.txt and put their locations in module-locations.txt? Isn't that a lot of data, with a lot of room for error and maintenance headaches? Isn't it also a bit of extra work for developers to have to separate their actor code from their other code, even if the actor code is not very significant? What if I have a lot of actors that I do not expect to be useful outside of the module I am developing, which uses them? Does it really make sense to store these actors elsewhere?

I think that it is precisely this concern which has motivated you to suggest the concept of actor groups. An actor group can specify a set of actors, including, presumably, their relative priorities and their locations, and a module that needs them can refer to an actor group instead of individual actors. This would certainly go some way toward reducing the additional metadata that this idea would require us to maintain. At the same time, there would still be increased overhead involved with developing actors under this proposal. For each actor I develop, no matter how trivial, I have to specify a unique location in the repository. I have to go ahead and create that location, along with appropriate tags. I have to update metadata either in the modules that use my actor, or in an actor group that manages metadata for a group of related actors.

Clearly, to the extent that people are developing many trivial actors (for example, actors that extend existing actors in small ways necessary to a particular domain), this overhead would be a significant part of the work needed to develop the actor. In contrast, to the extent that developers are creating sophisticated actors, these costs will seem relatively minor in comparison to the work to be done. Basically, our view of the desirability of this proposal might hang to some extent on what sorts of actors we expect to be developed.

We should keep in mind that this proposal, to the extent that it increases the overhead involved in creating an actor, will give developers an incentive to develop more sophisticated actors that do more, rather than dividing work among many simple actors. These more sophisticated actors may be less desirable to share, because they are more likely to perform domain-specific work. In contrast, if I instead have many somewhat simpler actors that together perform the same functionality, it might be that some other project will find one or more of these actors useful, while disregarding the more domain-specific instances. On the other hand, one could imagine that more sophisticated actors would be more desirable in some contexts, in that you take one actor and it does a lot for you. There might also be some advantage in the flexibility of using separately versioned actors (and also some increased complexity as well).

Anyway, my point isn't that more sophisticated actors are good or bad, but instead to point out that to the extent that you increase the overhead of creating an actor, as you would when you require them to be stored in their own modules, you are creating an incentive (which will be overcome in some instances) to create fewer and more complex actors. I am not sure whether we should prefer more complex or simpler actors, but I do think that it is an unfortunate side effect of the proposal that it would increase the relative costs of creating one type versus the other...

It should be pointed out that if developers want to have only one actor in a module, nothing in the current build system would stop them. So, what we are proposing here isn't the option to create actors in their own modules; it is the requirement to create them in their own modules. To the extent that benefits arise from separate versioning of individual actors or suites of related actors, developers are already capable of realizing those benefits by choosing to group actors into modules appropriately. So, the question arises: what benefits exactly arise from this proposal? It seems to me that there are costs. Developers who don't really care would have to spend extra work creating separate modules along with the associated metadata. Developers would have to update and debug problems that arise when this metadata is incorrectly specified, or when updates to the code make it obsolete. These are not insignificant costs.

I can imagine one benefit, and it has already been mentioned. Perhaps the LSID could somehow be automatically generated from the repository location and version of the actor, which is guaranteed to be unique. First, I am not sure that this is the best way to generate LSIDs. There may be alternatives that do not impose these costs on developers. Second, I am not sure that we cannot get this benefit even without the requirement that individual actors be stored in their own modules. After all, each actor still has a unique location, even if it is not in its own module (that unique location consists of the module it is stored in plus the actor name). Each actor also has a unique SVN revision associated with the last change to that actor. So, I am not sure that we could not achieve this benefit without requirements regarding the storage of actors.

In another post, I would like to address your proposed repository structure...

 

Re: Build System Requirements

Posted by Chad Berkley at October 20. 2008

I actually don't think it's as much overhead as you think.  Besides the trunk/branches/tags directories, I added very little that wasn't already in the current directory-based structure.

The versioning of actors has been an issue with Kepler forever, so I think this goes a long way toward dealing with that.  Currently, the actors are each in their own directory, which is similar to the structure that I proposed.  They will still have to have the same amount of metadata but will be able to be structured more appropriately (i.e., they can have their own resources with them in a standard directory structure instead of relying on Kepler to load resources).  I think this will actually make it easier for people to develop actors because they can be treated as a development project separate from Kepler itself, which is currently not possible.

I don't think we're requiring actor development in separate modules.  I think it would be highly recommended, but if someone thinks there is a necessity to create a module with multiple actors, I don't think that would be a problem.  

I think one way to help this process along for devs is to create the actor development kit that we've talked about several times: a simple tool, probably built into the build system, that would create the directory structure and template metadata files needed for a new actor so that you don't have to do it manually.
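Nothing like this exists yet, but what I have in mind is something you could drive with a single command. The target name, property names, and generated layout below are all purely hypothetical:

<code>
# hypothetical invocation of the proposed actor development kit
ant create-actor -Dname=MyActor -Dpackage=org.myproject.actors

# it would generate something like a directory skeleton plus template metadata
# files: a source tree for the Java code, a resources area for the manifest and
# MoML templates, and a place for any third-party jars the actor depends on
</code>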

Re: Build System Requirements

Posted by Timothy McPhillips at October 20. 2008

Here's my perspective on the idea of each actor being represented as its own module. 

First, I think it is going to be a rare thing to develop a single actor in isolation from other code.  Whenever I've seen new workflow support developed for a community, a large number of new actors had to be created that were meant to work together (the ppod preview release included over 35 new actors, and it only *scratched* the surface of what systematists actually need).  It'd be painful to have each of those actors isolated in its own deep directory tree.  The Java packaging mechanism is nice for making a first-order association between actors and developing them in concert.  I'd like all those actors to be close to each other in the same directory tree when I'm working on them (and close to the support classes they all depend on).

Second, I'm not sure if we really need to continue to employ the current approach of manually creating a directory, moml file, and manifest file for each actor under src/actor  (resources/actors in the new module approach) to serve as sources for KAR files at all.  KAR files are a nice way to package actors for sharing and then loading into Kepler, but I'd like to see the creation of KAR files via manual editing of text files completely eliminated.  Can't we create the KAR files somehow without making these intermediate files by hand (and without individually exporting them from Kepler)?

I realize the KAR source files also are used to apply semantic annotations to actors and control where they show up in the Kepler actor library.  But ultimately is this really the best place to put such annotations?  If folks add annotations (or documentation) to an actor later through some other mechanism (through Kepler, through the Kepler actor repository, etc), then these won't be reflected in the KAR sources and future KAR files built from them.  So maybe the actor repository is the place to store annotations, not the KAR 'sources'.  Whatever we do, I think we should shoot for automating all of this such that most folks, even actor developers, don't know what goes into making a KAR file at all. 

My personal preference as an engineer would be the following:  When I run Kepler (both in production and during development), all actors (all subclasses of ptolemy.actor.TypedAtomicActor, say) would show up in the Library pane, organized by Java package (the Java packages in which actors are located would comprise the default "ontology" for the actors).  The actors also would show up elsewhere in the Library if semantically annotated (so that they can be grouped in ways orthogonal to the Java packaging scheme), but at least they'd be immediately available to search for and drag-and-drop onto the canvas without doing any work at all.  This approach would eliminate an enormous amount of overhead (overhead that I currently experience myself).

I like the suggestion that the identification and versioning of actors be automatic as well.  Atomic actors already are uniquely defined by their fully qualified class name and their version in whatever source code repository they are stored in.  It will be very nice when I no longer have to assign (or even understand) the unique ID assigned to actors or the syntax for describing versions in that ID string!

Re: Build System Requirements

Posted by Chad Berkley at October 21. 2008

I have to disagree with you that most people will develop many actors.  In my experience, most people have developed 1-5 actors in small blocks of development.  I think developing 35 actors is the exception.  That having been said, I fully agree with you that the current system is too hard to deal with.  I really don't want people hand-editing text files.  I actually never said that we would *require* each actor to be in a separate module (though I think it was implied that I did say that).  In fact, I see no reason why a group of related actors couldn't be in the same module.

The editing of the actor metadata files by hand is obviously not optimal.  I tried to address this above where I called for going through with the idea of an actor development kit that would create this environment and make it easy to bootstrap.  The issue here is that Ptolemy has many different ideas of what an actor is.  It can be a Java object.  It can be an XML object.  It can have ports/parameters declared in Java.  It can have ports/parameters declared in the XML.  When Shawn and I designed the actor metadata language, we were trying to eliminate all of that ambiguity and get everything declared in one place, hence the actor metadata file.  If we want to change or eliminate that, it is going to take some major reworking of the way Kepler loads actors and displays them to users.

The semantic annotations are stored with the actor because they describe the actor.  Storing them externally to the actor would create ambiguity.  Storing them in the repository assumes that whenever you use Kepler, you have an internet connection.  Many of our users do not have this luxury.  In the past, the organization of the actor repository was done in a separate file, which quickly turned into a huge mess when all 215 or more actors were assigned there.  The file was static and required commit privileges to the Kepler configs to change.  I think allowing actor authors to edit their own annotations with the actor is a much more concise, organized way to do it.  It separates the content from the structure of the library, which I think is also a good thing.

Organizing actors by their class name would be great for Java programmers like ourselves.  Unfortunately, I don't think geologists and ecologists would think this is such a great organization technique.  This is why we designed the ontology system, so that other domains could organize things how they saw fit.  We've been actively trying to hide things like Java classes from Kepler users so as not to alienate our domain-scientist user base.  Having a default sort order like this would turn off many ecologists.  I think this might be nice as an option, but we should be able to turn it off for releases to domain scientists.

The way I think of this, it's more like giving an actor developer their own little piece of space to develop in.  Everything is self-contained and it has a standard directory structure.  For the default set of actors, I could write a script to transfer them from their current directory structure to this new structure in about 10 minutes.  I think having the extra overhead and directory structure is worth it to keep the actors modularized and not create special cases to deal with later.

Re: Build System Requirements

Posted by Timothy McPhillips at October 21. 2008

Good points, Chad.  What I'm questioning here is not the need for an actor metadata language, associating metadata with actors, or the approach that Kepler takes in loading and displaying actors based on this metadata.  What I'm confused about is the need for version-controlled source files for KAR files.  Is the author of an actor the authority on the semantic annotations that are assigned to it, on what ports the actor can have, and on what parameter values are assigned to it?  I thought anyone could customize an actor and save an updated KAR file.  But then don't these variant KAR files lack corresponding source files in the repository?  It seems to me that we can't have version-controlled (via SVN, I mean) sources for every KAR file out there that might be usefully shared.  Why then do we expect version-controlled KAR sources in the first place?  Do you see why I'm confused?  Can't we bootstrap to the first, barebones KAR file for an actor directly, without using a version-controlled directory of source files?  If we could, then creating a new actor would require simply compiling a new Java source file for the actor.

I agree the Java-package based organization should not be the one we provide to scientists.  When playing the role of a workflow engineer, though, I'd like to be able to find all actors that are loaded in the JVM, via the library pane and search dialog, without specifically creating metadata files for each actor.  From there I could easily further annotate the actors I've created so that they show up in a more meaningful place in the library.  It'd be totally cool if the package-based organization were hidden by default.

All:  Has anyone spelled out the usage scenarios for the KAR system, the Library, and repository showing how sharing of actors and actor metadata is expected to work, end-to-end?  I.e., from actor creation, through customizing one's own library, and sharing customizations of actors with others via the repository?  And including situations such as the destruction of one's .kepler directory?

Re: Build System Requirements

Posted by Chad Berkley at October 22. 2008

Ahh, I see what you're getting at, I think.  I actually wasn't talking about putting KAR files themselves into SVN at all, just the source that makes up the distributed versions of the KAR.  The KAR files are currently generated on the fly and are not in the repository.  If a user makes a change to an actor, they can create a new KAR file and upload that to the actor repository (or save it to their local disk), but not to SVN.

I think one thing you're missing is that you're still assuming that all actors have Java source, when this is not the case.  Some actors are just XML files; take a look at Sinewave for an example.  Your example of people changing actors and getting new versions of them without checking them in anywhere makes me think more about the ID process, though.  This has been a thorn in my side for some time.  How do you get a unique ID for these types of objects when you might not have an internet connection?  I guess we could do some sort of unique hash of the actor and use that.
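For example, a content hash computed locally would already give a reasonably unique identifier without any network access. A rough sketch (the choice of hash, and of exactly what gets hashed, is wide open; the file name is just illustrative):

<code>
# hash the actor's definition (e.g. its MoML/XML or KAR file) to derive an ID
# that can be generated offline
shasum -a 1 Sinewave.xml
</code>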

As far as repository structure goes, though, I don't think the runtime actor creation/ID problem really plays into how we structure the repository to accommodate actor authoring.  I think there still needs to be a place in the repository where actors live, especially those that are in our core release(s) of Kepler.  I'm still of the opinion that actors should just be treated like other modules and have their own repository structure.  Whether there is one actor in the structure or 10 is, I think, debatable (and to tell you the truth, I don't think it is *that* important), but I just want to get a general repository structure outlined so we can start moving stuff around and get the build system(s) working with the new structure.

Here's a link to the documentation we had when we developed the KAR file format:  http://www.kepler-project.org/Wiki.jsp?page=KSWEncapsulationSpecification  Note they used to be called "ksw" files.  This documentation could probably be fleshed out a bit more, but I think that might be the role of the framework team when we start working on the core functionality.

 

Re: Build System Requirements

Posted by Daniel Crawl at October 23. 2008

Hi Chad and Tim,

I've written a tool to help automate the creation of KAR MoMLs and manifests. It uses the existing code that gets executed when one manually exports an actor on the canvas, except it runs from the command line, can do more than one actor at a time, and includes the documentation.

Perhaps this could be used in the new build system to reduce the amount of metadata stored in the repository for each actor. We would still need a mechanism to provide the LSIDs and semantic annotations, but everything else in the KAR could be automatically generated. (Of course XML actors, and actors with additional ports, special default values, customized documentation, etc., would still need to be done by hand).

If you think it'd be useful, I can work on cleaning up the code and adding it to SVN.

  --dan

Re: Build System Requirements

Posted by Timothy McPhillips at October 24. 2008

Dan, this sounds extremely useful to me.  I'd like to be able to use this tool now, as well as possibly see it incorporated into the build system as you suggest.  Very cool!

Re: Build System Requirements

Posted by Bertram Ludaescher at October 27. 2008

All, 

I'd also like to suggest revisiting the conceptual model of actors, actor configurations, etc., and clarifying some of the notions.

A little bit of terminology might go a long way. For example, it seems that we have quite a few actors that can be *configured* in various ways (to have or not have certain ports, to have certain default parameter values or not, etc.) and that can be *annotated*. Many actors have a "Java core", but some don't.

It seems we're having a bit of an "actor identity crisis" here... ;)

Let's focus for the moment only on Java actors. Say you have an actor X.java. So this code is the "heart" of actor X. Now various people start using this actor, creating different (semantic) annotations, different default parameter values etc. Let's call these different configurations of the actor X.c1, X.c2, X.c3. Now a user starts to employ X.c2 but then decides she wants to change part of the configuration c2. So we have X.c2'. So far so good.

So let's dump all these new "actors" (or shall we say, "configured actors"!?) into a repository -- here we go:

repository = { ..., X, X.c1, X.c2, X.c2', X.c3 ... }

So far so good. Now the author of X sees that a bug needs to be fixed in X, creating X' from X.

What happens now to X.c1, X.c2 etc?

My suggestion would be to keep the existing configurations as they are, referring back to the old (possibly buggy or more limited) version X. But it would be wonderful if the owners of X.c1, X.c2, etc. would learn of the *possibility* of upgrading their internal "X core" from X to X'.

In summary, it seems, at least for Java actors, a distinction between the actor *code* and actor *configurations* could be helpful.

A versioning system should be able to "do the right thing" and allow evolution of the code base and of configurations largely separately, while keeping track of dependencies and version histories.

Does that make sense? I have an MS student (Erick) who is beginning to look into these matters...

Your comments would be very welcome.

cheers

Bertram

Re: Build System Requirements

Posted by Christopher Brooks at October 28. 2008

OK, sorry for the late arrival.  I read Chad's proposal, which seemed quite good.  Like David, I pretty much agree with everything Chad says, which is good.

 

The one item which might be an issue concerns having every actor be a module.  I think this might be too fine-grained.  I do see that Chad is saying a module _may_ consist of just one actor, but a module may also consist of multiple actors.

 

In thinking about module granularity, it seems like authors want their work to be in very fine-grained modules and users want large, fairly monolithic modules.  I see this when people treat Ptolemy as one module, but I say, "But Ptolemy consists of several products: gui, headless, plotter, hyvisual, ptiny, visualsense, viptos."  There is an inherent tension about module size.  I think Aaron or someone suggested that modules should be large and should only be split up to create deliverables.  I like this notion.

 

One pattern I'm looking at is having a few smaller, fine-grained modules at the base of Ptolemy, then some fairly large modules to support actor semantics and common groups of actors, and then fine-grained modules to support special actors that use third-party features.

 

Regarding Bertram's discussion of actors and configurations, we did quite a bit of work with Actor-Oriented Classes, where Edward basically applied object-oriented inheritance to actors.  See http://chess.eecs.berkeley.edu/pubs/314.html.

This work covers some of the issues of how composite actors get instantiated and subclassed over time.  It might be useful to look at composite actors to get some understanding of how atomic actors need to behave.

The Ptolemy documentation system also has an interesting element in which actors get annotated with comments, so that instances have comments attached to them.

One issue is that most people quickly get flustered with a complex versioning system.  There are just too many choices and possibilities.  I'm not sure how to avoid these issues.

 

_Christopher

Re: Build System Requirements

Posted by Timothy McPhillips at October 29. 2008

Yesterday, a few of us chatted on the phone about some of the ideas mentioned above.  Here's a summary of some of the things we discussed:

1.  Chad is experimenting with splitting the source code and resources at the trunk of Kepler into two modules, a core module and a "non-core" module.  (Chad, I assume one thing you want to demonstrate is that the core module has no dependencies at compile time on the non-core module?)

2.  We would like the internal directory structure of modules in the repository to be standardized across the core module and all extensions, standard and otherwise.  The build system will then be able to treat all modules uniformly (and will not need to restructure directories on the developer's machine in order to achieve this uniformity).  David is going to make the next revision to the directory structure proposed at https://dev.kepler-project.org/developers/teams/build/systems/build-system/build-system-ideas.

3. We agreed that the directory structure within modules should be flatter than that currently used by the modules in https://code.kepler-project.org/code/kepler/modules/.  (We no longer see a significant advantage in adopting the Maven standard, for example.)  There should be clear places to put Java source code, third-party jars, native libraries and executables, source code for native libraries and executables, unit tests, workflow tests, and resources including icons for actors.  (A hypothetical sketch of such a layout follows this list.)

4. There could be advantages in making it clear in the source code repository what modules represent the core of Kepler and those standard extension modules that would ship by default with a new release of Kepler.  It could make it easier for developers to know what modules to check out when working with or extending the standard Kepler base system.  And it could simplify applying access control to these modules.

5.  Currently, the modules in the repository can be tagged and branched independently.  In general this is very useful.  We are currently using the SVN convention of directories named 'trunk', 'tags', and 'branches' to identify the trunk for a module and to group its tags and branches, with these three directories at the same level within each module.  An alternative would be to put all of the module trunks together in a single directory such that they can be checked out together via a single SVN command (without getting any of the tags or branches), and put the branches and tags elsewhere.  (One disadvantage of this would be that tags and branches would have to be named to indicate what module they belong to, and this could lead to difficulties when modules are renamed.)

6.  The new build system is flexible with respect to the locations of particular modules in the repository.  They do not necessarily need to be organized uniformly with respect to each other--modules managed by the build system do not even need to be stored in the central repository.  There may be good reasons to group modules by topic area or contributing project.  At the same time, more standardization of the locations of things in the repository, where to find branches and tags for modules, etc, would make it easier for everyone to see what is going on and to share their developments with others.
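Regarding point 3, here is one hypothetical example of a flat, standardized module layout. Every directory name below is a placeholder for illustration only; nothing has been decided:

<code>
# hypothetical module skeleton illustrating point 3 (all names are placeholders)
mkdir -p my-module/src              # Java source code
mkdir -p my-module/lib/jar          # third-party jars
mkdir -p my-module/lib/native       # native libraries and executables
mkdir -p my-module/src-native       # source code for native libraries and executables
mkdir -p my-module/tests/unit       # unit tests
mkdir -p my-module/tests/workflows  # workflow tests
mkdir -p my-module/resources        # resources, including icons for actors
</code>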

Re: Build System Requirements

Posted by Daniel Crawl at October 29. 2008

Does #2 apply to Ptolemy?

Re: Build System Requirements

Posted by Timothy McPhillips at October 29. 2008

I don't think it is necessary to standardize directory structures across Kepler and Ptolemy such that Ptolemy looks like a Kepler module.  The Kepler repository and build system are meant for development of Kepler and extensions to it, and I wouldn't expect developers to develop new features in Ptolemy using this system.  So loose coupling to Ptolemy should be fine, I think.

What are your thoughts?

Previously Daniel Crawl wrote:

Does #2 apply to Ptolemy?

 
