Developer Unconference at Linux Foundation DDF June 2019



Follow-up Developer Unconference Meeting:

Meeting Join URL:  https://zoom.us/j/407090817

Meeting Date/Time:    25/07/2019    1:00pm (UTC/GMT)

Meeting cancelled this month.

Due to:

  • No current agenda items until the TSC has seen the up-to-date document.

  • Waiting on a response regarding communicating the document to the TSC.



Action Points:

@Gareth Roper to schedule the monthly unmeeting, on the last Thursday of every month.

@Gareth Roper to clean up the comments from the meeting, shown below.

@Gareth Roper to create a document for the TSC. A draft has been created and is still being finalised.



Meeting Date/Time:    27/06/2019    1:00pm (UTC/GMT)

Agenda: 

  1. Welcome. Covered

  2. Go over previously discussed topics. Covered

  3. Review topics that were not discussed. Covered

  4. Add any additional topics. Covered

  5. Decide on how to proceed with the Developer Unconference. Covered

Action Points:

@Gareth Roper to schedule monthly unmeeting. 

@Gareth Roper to clean up comments from meeting, shown below. 

@Gareth Roper to create document for TSC. 





Meeting at DDF June 2019

Date: 2019-06-12

Time: 08:00 - 10:00 UTC

Discussed Topics: 

1: Setting up a developer ENV for ONAP - making it easy

        The Good:   We have a lot of people out there who can do this.

                            What's there means people can get going.

        The Bad:

                       - For new people coming in (a new company) it is difficult - they need to do it themselves.

                       - Setting up multiple projects together is difficult - the nature of a big system.

                       - Duplication of the places where issues are recorded.

        The things we can Improve:

                      - Not all projects have READMEs - each project should have one.

                      - Each project should have an owner - the PTL is the owner.

                      - This info is in INFO.yaml - could do this at a per-module level, or down to whatever level the project thinks is appropriate.

                      - Need to keep this updated as people move.

                      - Suggestion - Lazy Dog - "repeated tasks documented" - the question is where to put them.

                      - A lazy dog per project / per release -- tricky to keep up to date.

                          - e.g. a sharp title - "Error Code title" + workaround. (NOTE: Jira will also record it.)

2: A tutorial is there - but it's out of date. Could better tutorials be developed?

    The Good:  Generally, existing people can help new people coming in.

    The Bad:

                      - Tutorials are "Woeful".

                      - Hard to get started - but once up and running it is OK.

    The things we can Improve: "How does it breathe?"

                       - We should make a tutorial for developers joining ONAP.

                       - We need to reference the various info tools - owner, INFO.yaml, lazy dogs, etc. Top-level info to get started.

                       - We could archive old tutorials.

                       - Important that new people reach out and start participating, and that existing people support new people - big projects can spread onboarding support around.

                            - Common gain if more people are working on similar stuff.

                       - This is less work than tutorials on everything.

                       - Work on a getting-started page - can update the existing one.

                       - Communication policy:

                            - Rocket.Chat - not widely used by every user.

                            - The discuss list is the official comms forum.

                            - The weekly meeting is the primary focus.

                       - Component tutorials for new developers:

                            - These are needed, and need to be updated periodically.

                            - Message to the PTLs that this needs some capacity.



3: Project Scheduling + getting adequate coding time

    The good :

         - El Alto is helping - giving breathing space.

    The Bad :

        - Dublin development window was very tight

        - Late Epic freeze - short coding window

    Improvements :

        - M0 needs to be M0.

        - Be formal about Epics being closed at M0 to ensure development time.

            

4: Jira - marking tickets as easy for beginners

List of people you can contact regarding specific components:

https://lf-onap.atlassian.net/wiki/display/DW/Resources+and+Repositories#ResourcesandRepositories-ActiveandAvailableInventory





This is the gold-standard badging: https://bestpractices.coreinfrastructure.org/en/projects/1197?criteria_level=2#changecontrol

- We can move it earlier.

- A common labeling convention would help.



5: Unit testing

    The Good:

         - There are some best practices

    The Bad:

         - There are a lot of bad tests out there

    Improvements : 

        - Mutation tests - can identify ineffective tests (a short example is sketched below).

        - Coverage on new code - the expectation is that new code always has effective coverage. This is not enforced in the tools, but could be turned on.
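
As an illustration of what mutation testing catches, here is a minimal, hypothetical JUnit 4 sketch (the class and method names are invented for this example, not taken from any ONAP project): the weak test executes the code and raises line coverage but asserts nothing, so a tool such as PIT would report surviving mutants; the stronger test kills them.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Hypothetical production class used only for this sketch.
    class PercentCalculator {
        int percentOf(int part, int whole) {
            return (part * 100) / whole;
        }
    }

    public class PercentCalculatorTest {

        // Weak test: it runs the code and lifts line coverage, but asserts nothing
        // about the result. A mutation tool (e.g. PIT) can change '*' to '/' or
        // '100' to '101' and this test still passes, so the mutants survive.
        @Test
        public void weakTestOnlyExercisesTheCode() {
            new PercentCalculator().percentOf(25, 50);
        }

        // Effective test: it pins the expected value, so mutants of the arithmetic
        // make the test fail, i.e. the mutants are killed.
        @Test
        public void effectiveTestChecksTheResult() {
            assertEquals(50, new PercentCalculator().percentOf(25, 50));
        }
    }

Both tests give the same line coverage, which is why coverage alone does not show whether tests are effective.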

          



Next steps

    Book a slot at the TSC and feed back a summary of this meeting to them.

    Do this as a one-off for now, and feed back to the TSC and PTLs. ACTION: @Gareth Roper to call the next meeting.



    Consider scheduling a monthly "Developers UnMeeting" to follow up on this.

        Monthly feedback to TSC and PTLs

    Informal for now. 





______________________________________________________________

Topics identified in the meeting and for further discussion

______________________________________________________________

  • Test ENVs  --  "Hello world"ing 

Later discussion.



  • Documentation -- Developer focused 

Tagging documents, especially the old documentation.

Tagging each document with a specific release.

Checking old tags.

READMEs in Git repositories - some projects are missing READMEs and some are out of date.

CSIT documentation needs to be improved. 

        RST format

        Old / out of date docs

        Wiki maintenance



  • Code Review Processes

        Variety of these across projects and contributors

       See this Jira: TSC-69 - TSC Policy on Code Reviews and Commits (Closed)

Having multiple colleagues in the same component/project leads to easier reviewing, but potentially causes rubber-stamping.

A committer will need to check the review before a merge; this should help to alleviate the issue. However, there have been situations in the past where this has not stopped it.

Using CSIT suites as part of the review process has been discussed previously. Resource issues may be a blocker on this.

CSIT tests should be executed locally before a review is created.

Sonar analysis before the code review - skipping the analysis can cause issues after merge.

Running these tests can avoid multiple time-wasting issues down the line.

Large code dumps make it extremely difficult to review for people not working on the project.

        

  • SonarQube / Checkstyle

        Reviewers cannot see code coverage easily.

        Enforcement.

        Built-in checks - Samsung proposal.

Sonar analysis before the code review - skipping the analysis can cause issues after merge.

Who has access to define the Sonar validations?

Need to see that the commit doesn't introduce any issues before it is committed.

LFIT release engineering probably have the ability to change the thresholds.

Threshold values need to be a TSC decision.

Not build-breaking, as changing them could be difficult.



  • Kotlin language - bringing it in

Further discussion



  • Reflecting Jira ticket status in Gerrit

Automated Jira status change based on Gerrit status (when a review is added, should it change the Jira status?).

Should not automatically close tickets, but could maybe move them to a different status.

A lot of tasks are not in the closed status.

Possibly every Jira task equals one Gerrit review. Requires small Jira tasks.

Possibly use the "Submitted" or "Delivered" status after a review is created/reviewed (a rough sketch of such automation is below).





  • Project developed in a company - and then big code delivery to ONAP

We don't know what business functionality is being contributed across multiple sub-projects. Suddenly big chunks of code appear - the community is not aware of all the activity.

Large code dumps make it extremely difficult to review for people not working on the project.

Seed code into a new project is more understandable.

Large amounts of code added to an existing project can be extremely "disrespectful" due to the ripple effects and consequences down the line.

There have also been situations of one person both committing and merging the code.

There have been multiple experiences of this issue.

Possibly require documentation before a commit is merged; this will only mitigate the issue.

Should there be a standard for the maximum size of commits, and if you need to exceed it, what are the guidelines?

Effectively not reviewed, as it's near impossible to fully review large code dumps.

Sonar and CSIT testing before review may only mitigate problems, not solve them.

A solution for this needs to be investigated.

No reason why this can't be applied to new projects, not just existing ones.

        

  • Bring Pairwise testing early - CSIT tests

Using CSIT suites as part of the review process has been discussed previously. Resource issues may be a blocker on this.

CSIT tests should be locally executed before a review is created.

CSIT documentation needs to be improved. 

Pairwise = Testing multiple components together.

CSIT does simulate a lot of ONAP, so it can miss some issues.

Possibly simulate nothing and test a fully working system (assuming no resource issues).

A large amount of time can be spent getting ONAP and the CSIT tests to run effectively.





  • Different deployment platforms for testing - OOM and Cloudify make testing difficult (DCAE)

Further discussion.

Documentation will help here, of course.

OOM is difficult to use for beginners.



  • Do we have a miniature version of DMaaP for testing - it's very large

Further discussion 

Documentation



  • Policy wrote their own simulator for testing - are simulators a solution?

Something has to be simulated; a full lab test can cause resource issues.

Investigate a solution for full lab tests.

Robot and CSIT tests run daily, in two completely separate repos/code bases. The Portal component just copied the Robot tests across.

In practice no one has reused the same test code for both; it has been duplicated and adjusted.

We should reduce this as much as possible; there are currently no best practices or examples, apart from the common library, which doesn't contain much yet.

Can help to reduce after-commit errors.

Testing negative situations with simulators is much easier than testing with a full ONAP. Simulators will be needed.

You will need to write specific test code for a full ONAP. Less control and ability.

Verified in the use-case testing, but this will only be positive testing.

We should have both simulators and full ONAP deployments for testing - the best of both worlds (a minimal simulator sketch follows below).
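
To make the negative-testing point concrete, here is a minimal, purely illustrative sketch (not any existing ONAP simulator): a tiny HTTP stub built on the JDK's built-in HttpServer that always answers 503, so a caller's error handling can be exercised without a full deployment. Class name, port, and payload are arbitrary choices for this sketch.

    import com.sun.net.httpserver.HttpServer;

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    /**
     * Illustrative stand-in for a downstream component. It binds to a local port
     * and answers every request with 503, which lets a test drive the caller's
     * retry / error-handling paths without deploying the real system.
     */
    public class FailingDownstreamSimulator {

        private final HttpServer server;

        public FailingDownstreamSimulator(int port) throws IOException {
            server = HttpServer.create(new InetSocketAddress(port), 0);
            server.createContext("/", exchange -> {
                byte[] body = "{\"error\":\"simulated outage\"}".getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(503, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
        }

        public void start() {
            server.start();
        }

        public void stop() {
            server.stop(0); // stop immediately
        }

        public static void main(String[] args) throws IOException {
            FailingDownstreamSimulator simulator = new FailingDownstreamSimulator(8089);
            simulator.start();
            System.out.println("Simulated downstream answering 503 on http://localhost:8089/");
        }
    }

A test would point its client at http://localhost:8089 and assert on the failure path, which is exactly the kind of negative case that is hard to provoke against a full ONAP.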



  • Common library for JSON POJO sims / mocks. An API library for managing common events between systems.

A common library would eliminate wasted time.

How do we document this, and how do people find the correct documentation? (A rough sketch of what a shared event POJO could look like is below.)
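
Hedged illustration only, since no such library is agreed yet: a shared event POJO plus Jackson serialization that multiple projects' tests, simulators, and mocks could reuse instead of each re-declaring the same JSON shape. The field names and class name are invented for the sketch.

    import com.fasterxml.jackson.databind.ObjectMapper;

    /**
     * Hypothetical shared event shape that simulators, mocks, and tests could all
     * import from one common library instead of duplicating JSON handling.
     */
    public class CommonEvent {

        private String eventId;
        private String sourceComponent;
        private String payload;

        // A no-args constructor plus getters/setters is what Jackson needs for data binding.
        public CommonEvent() { }

        public String getEventId() { return eventId; }
        public void setEventId(String eventId) { this.eventId = eventId; }

        public String getSourceComponent() { return sourceComponent; }
        public void setSourceComponent(String sourceComponent) { this.sourceComponent = sourceComponent; }

        public String getPayload() { return payload; }
        public void setPayload(String payload) { this.payload = payload; }

        public static void main(String[] args) throws Exception {
            ObjectMapper mapper = new ObjectMapper();

            CommonEvent event = new CommonEvent();
            event.setEventId("evt-001");
            event.setSourceComponent("example-simulator");
            event.setPayload("hello");

            // Round-trip: POJO -> JSON -> POJO, the part every project currently re-implements.
            String json = mapper.writeValueAsString(event);
            CommonEvent parsed = mapper.readValue(json, CommonEvent.class);
            System.out.println(json + " -> " + parsed.getEventId());
        }
    }

Keeping the POJO and its (de)serialization in one place is what would make the documentation question above tractable: there is a single artifact to point to.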





Further Topics:



Checkstyle and Formatting Standardization

Checkstyle configuration is extremely different across projects.

Many projects have no checkstyle at all - for example, AAF has no code standard.

It should be identical for all sub-projects of ONAP.

This will help across the board - reviews, further documentation.

An ONAP checkstyle should be standardized.

When you import the checkstyle template into Eclipse it seems to change.

Running checkstyle through a Maven build appears to be more accurate than importing it into Eclipse/IntelliJ.

Formatting is checked into A&AI. Instructions for adding it to Eclipse are available.

We should have a centralised area for checkstyle and formatting.

Should we be updating our checkstyle when Google adds features etc. to the checkstyle template that ONAP uses a modified version of?