There are many ways you can contribute to the SkyWalking community.
- Go through our documents, and point out or fix a problem. Translate the documents into other languages.
- Download our releases, try to monitor your applications, and provide feedback to us.
- Read our source code. For details, reach out to us.
- If you find any bugs, submit an issue. You can also try to fix it yourself. Issues labelled good first issue are a good place to start.
- Submit an issue or start a discussion on GitHub.
- See all mailing list discussions at the website list review. If you are already a SkyWalking committer, you can log in and use the mailing list in browser mode. Otherwise, subscribe by following the steps below.
- Issue reports and discussions may also take place via dev@skywalking.apache.org. Mail to dev-subscribe@skywalking.apache.org, and follow the instructions in the reply to subscribe to the mailing list.
All the following channels are open to the community.
- Submit a GitHub issue for bug reports or feature proposals.
- Mailing list: dev@skywalking.apache.org. Mail to dev-subscribe@skywalking.apache.org, and follow the instructions in the reply to subscribe to the mailing list.
- Submit a discussion to ask questions.
Become an official Apache SkyWalking Committer
The PMC assesses the contributions of every contributor, including their code contributions. It also promotes, votes on, and invites new committers and PMC members according to the Apache guides. See Become an official Apache SkyWalking Committer for more details.
For code developers
For developers, the starting point is the Compiling Guide. It guides developers on how to build the project locally and set up the environment.
After setting up the environment and writing your code, to facilitate integration with the SkyWalking project, you’ll need to run the tests locally to verify that your code does not break any existing features, and write unit tests (UTs) to verify that the new code works well. This will prevent it from being broken by future contributors. If the new code involves other components or libraries, you should also write integration tests (ITs).
SkyWalking leverages maven-surefire-plugin to run the UTs and maven-failsafe-plugin to run the ITs. maven-surefire-plugin excludes ITs (whose class names start with IT) and leaves them for maven-failsafe-plugin to run, which is bound to the verify phase. Therefore, to run the UTs, try ./mvnw clean test, which only runs the UTs but not the ITs.
If you would like to run the ITs, set the property skipITs to false, and activate the profiles of the modules whose ITs you want to run. E.g., if you would like to run the ITs in the backend module, run ./mvnw -Pbackend clean verify -DskipITs=false, and if you would like to run all the ITs, simply run ./mvnw clean verify -DskipITs=false.
Please be advised that if you’re writing integration tests, name them with the pattern IT* so they only run when the property skipITs is set to false.
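The surefire/failsafe split described above typically looks like the following pom.xml sketch. This is illustrative only; SkyWalking’s actual build configuration may differ in detail:

```xml
<!-- Illustrative sketch, not SkyWalking's actual pom.xml -->
<build>
  <plugins>
    <!-- Surefire runs UTs; IT* classes are excluded so failsafe picks them up -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <configuration>
        <excludes>
          <exclude>**/IT*.java</exclude>
        </excludes>
      </configuration>
    </plugin>
    <!-- Failsafe runs ITs during the integration-test and verify phases -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-failsafe-plugin</artifactId>
      <configuration>
        <includes>
          <include>**/IT*.java</include>
        </includes>
        <skipITs>${skipITs}</skipITs>
      </configuration>
      <executions>
        <execution>
          <goals>
            <goal>integration-test</goal>
            <goal>verify</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

With such a setup, ./mvnw clean test never touches IT* classes, while ./mvnw clean verify -DskipITs=false runs both phases.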
Java Microbenchmark Harness (JMH)
JMH is a Java harness for building, running, and analysing nano/micro/milli/macro benchmarks written in Java and other languages targeting the JVM.
We have a module called microbench which performs a series of micro-benchmarks for JMH testing. Make new JMH tests extend AbstractMicrobenchmark to customize runtime conditions (Measurement, Fork, Warmup, etc.).
JMH tests can run as normal unit tests. They can also run as an independent uber jar, via java -jar benchmarks.jar for all benchmarks, or java -jar benchmarks.jar exampleClassName for a specific test.
To output test results in JSON format, add -rf json, e.g. java -jar benchmarks.jar -rf json. If you run them through the IDE, you can configure the -DperfReportDir=savePath parameter to set the JMH report save path; a report in JSON format will be generated when the run ends.
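As a sketch, a new benchmark in the microbench module might look like the following. The base class name and the benchmarked method are illustrative assumptions, not copied from the source tree:

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;

// Illustrative only: extends the microbench module's base class so that
// shared Measurement/Fork/Warmup settings apply; names may differ in the repo.
public class StringConcatBenchmark extends AbstractMicrobenchmark {

    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    public String concat() {
        // A trivial operation under test; real benchmarks target OAP hot paths.
        return "service=" + "order" + ",endpoint=" + "/checkout";
    }
}
```

Because it is a plain class with JMH annotations, it runs both under the unit-test runner and from the uber jar as described above.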
More information about JMH can be found here: jmh docs.
End to End Tests (E2E)
Since version 6.3.0, we have introduced more automatic tests to perform software quality assurance. E2E is an integral part of it.
End-to-end testing is a methodology used to test whether the flow of an application is performing as designed from start to finish. The purpose of carrying out end-to-end tests is to identify system dependencies and to ensure that the right information is passed between various system components and systems.
The E2E test involves some/all of the OAP server, storage, coordinator, webapp, and the instrumented services, all of which are orchestrated by docker-compose.
Since version 8.9.0, we have migrated to e2e-v2, which leverages skywalking-infra-e2e and skywalking-cli to drive the whole e2e process.
skywalking-infra-e2e is used to control the e2e process, and
skywalking-cli is used to interact with the OAP, such as sending requests and getting response metrics from the OAP.
Writing E2E Cases
- Set up the environment
- Set up docker
- Set up swctl and yq (generally these 2 are enough) and other tools if your cases need them. You can refer to the setup scripts in the repository.
- Orchestrate the components
The goal of the E2E tests is to test the SkyWalking project as a whole, including the OAP server, storage, coordinator, webapp, and even the frontend UI (not for now), on the single node mode as well as the cluster mode. Therefore, the first step is to determine what case we are going to verify, and orchestrate the components.
To make the orchestration process easier, we’re using docker-compose, which provides a simple file format (docker-compose.yml) for orchestrating the required containers and offers a way to define the dependencies of the components.
Follow these steps:
- Decide what (and how many) containers will be needed. For example, for cluster testing, you’ll need > 2 OAP nodes, coordinators (e.g. zookeeper), storage (e.g. ElasticSearch), and instrumented services;
- Define the containers in
docker-compose.yml, and carefully specify the dependencies, starting orders, and most importantly, link them together, e.g. set the correct OAP address on the agent end, and set the correct coordinator address in OAP, etc.
- Define the e2e case config in e2e.yaml.
- Write the expected data (yml) for verification.
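The orchestration step above can be sketched with a hypothetical docker-compose.yml fragment. Service names, images, and ports here are illustrative; real cases define their own:

```yaml
# Hypothetical sketch of a minimal E2E case layout; real cases under
# test/e2e-v2/cases define their own services, images, and health checks.
version: "2.1"
services:
  oap:
    image: skywalking/oap:latest
    expose:
      - 11800   # gRPC port the agent reports to
      - 12800   # HTTP/GraphQL port queried by the UI and swctl
    healthcheck:
      test: ["CMD", "sh", "-c", "nc -z 127.0.0.1 11800"]
      interval: 5s
      timeout: 60s
      retries: 120
  provider:
    image: my-instrumented-service:latest   # hypothetical agent-instrumented app
    environment:
      # Point the agent at the OAP container defined above
      SW_AGENT_COLLECTOR_BACKEND_SERVICES: oap:11800
    depends_on:
      oap:
        condition: service_healthy
```

The depends_on/healthcheck pair encodes the starting order, and the agent address links the instrumented service to OAP, as the steps above require.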
All e2e cases should be under
skywalking/test/e2e-v2/cases. You can execute the e2e run command in the root directory, e.g.
e2e run -c test/e2e-v2/cases/alarm/h2/e2e.yaml
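The case config passed via -c ties the pieces together. The following e2e.yaml is a hedged sketch of the typical shape; consult existing cases under test/e2e-v2/cases for the exact options, as field names and values here are assumptions:

```yaml
# Hypothetical sketch of an e2e.yaml case config
setup:
  env: compose
  file: docker-compose.yml        # the orchestration file for this case
  timeout: 20m
verify:
  retry:
    count: 20
    interval: 10s
  cases:
    # Query via skywalking-cli (swctl) and compare against expected data
    - query: swctl --display yaml service ls
      expected: expected/service.yml
```

skywalking-infra-e2e retries each query until the response matches the expected data or the retry budget is exhausted.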
We expose all logs from all containers to stdout in the non-CI (local) mode; in CI, they are saved and uploaded to the GitHub server. You can download them (only when the tests have failed) at “Artifacts/Download artifacts/logs” (see top right) for debugging.
NOTE: Please verify the newly-added E2E test case locally first. However, if you find that it has passed locally but failed in the PR check status, make sure that all the updated/newly-added files (especially those in the submodules) are committed and included in the PR, or reset the git HEAD to the remote and verify locally again.
The SkyWalking project supports various extensions of existing features. If you are interested in writing extensions, read the following guides.
- This guides you in developing SkyWalking agent plugins to support more frameworks. Developers of both open source and private plugins should read this.
- If you would like to build a new probe or plugin in any language, please read the Component library definition and extension document.
- Storage extension development guide. Potential contributors can learn how to build a new storage implementor in addition to the official one.
- Customize analysis using OAL scripts. OAL scripts are located in
config/oal/*.oal. You could modify them and reboot the OAP server. Read Observability Analysis Language Introduction to learn more about OAL scripts.
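For instance, a typical OAL metric definition looks like the following. This is a representative sketch of the syntax, not necessarily a line present in the shipped scripts:

```
// Average latency of each service, counting only successful requests
service_resp_time = from(Service.latency).filter(status == true).longAvg();
```

Editing such definitions in config/oal/*.oal and rebooting the OAP server changes which metrics the analysis stream produces.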
- Source and scope extension for new metrics. For analysis of a new metric which SkyWalking hasn’t yet provided, add a new receiver. You would most likely have to add a new source and scope. To learn how to do this, read the document.
OAP backend dependency management
This section is only applicable to dependencies of the backend module.
As one of the Top Level Projects of The Apache Software Foundation (ASF), SkyWalking must follow the ASF 3RD PARTY LICENSE POLICY. So if you’re adding new dependencies to the project, you should make sure that the new dependencies would not break the policy, and add their LICENSE and NOTICE to the project.
We have a simple script to help you make sure that you haven’t missed out any new dependencies:
- Build a distribution package and unzip/untar it to folder dist.
- Run tools/dependencies/check-LICENSE.sh in the root directory. It will print out all new dependencies.
- Check the LICENSE and NOTICE of those dependencies to make sure that they can be included in an ASF project. Add them to the LICENSE and NOTICE files of the distribution.
- Add the names of these dependencies to the tools/dependencies/known-oap-backend-dependencies.txt file (in alphabetical order). check-LICENSE.sh should pass in the next run.
The performance profile is an enhancement feature in the APM system. We use thread dump to estimate the method execution time, rather than adding multiple local spans. In this way, the cost would be significantly reduced compared to using distributed tracing to locate the slow method. This feature is suitable in the production environment. The following documents are key to understanding the essential parts of this feature.
- Profile data report protocol is provided through gRPC, just like other traces and JVM data.
- Thread dump merging mechanism introduces the merging mechanism. This mechanism helps end users understand profile reports.
- Exporter tool of profile raw data guides you on how to package the original profile data for issue reports when the visualization doesn’t work well on the official UI.
If you’re a committer, read the Apache Release Guide to learn how to create an official Apache version release in accordance with Apache’s rules. As long as you keep our LICENSE and NOTICE, the Apache license allows everyone to redistribute.