
FlexCPD & Standard Headers

Adobe Employee, Nov 30, 2009

First of all, congrats on the release of FlexCPD - it's a great tool!

Looking at my results, I'm noticing that it flags all my standard comment headers, which are required on every single source file, as duplicate code. Any suggestions on how to keep these out of my results? I'd prefer not to mess with my minimum token count to accommodate them.

Or maybe comments should be excluded from the analysis?

Thanks,

Brian
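
For context on why identical headers get reported: a copy/paste detector like FlexCPD works on a token stream, so a header repeated verbatim in every file contributes the same run of tokens each time. Below is a minimal sketch (plain Java, not FlexPMD's actual code; the class name and threshold value are hypothetical) of how a repeated header can exceed a minimum-token threshold on its own:

```java
import java.util.Arrays;
import java.util.List;

public class HeaderDuplicationSketch {

    // Hypothetical minimum-token threshold; FlexCPD's real default may differ.
    static final int MINIMUM_TOKEN_COUNT = 25;

    // Very crude whitespace tokenizer; real CPD tokenizers are lexer-based.
    static List<String> tokenize(String source) {
        return Arrays.asList(source.trim().split("\\s+"));
    }

    public static void main(String[] args) {
        // A standard header copied verbatim into every source file.
        String header =
                "/** Copyright 2009 Example Corp. "
              + " *  Licensed under the Apache License, Version 2.0. "
              + " *  You may not use this file except in compliance with the License. "
              + " */";

        int count = tokenize(header).size();
        System.out.println("Header token count: " + count);

        // The same token run appears in every file, so once it reaches the
        // threshold it is reported as duplicated code even though it is a comment.
        if (count >= MINIMUM_TOKEN_COUNT) {
            System.out.println("Would be reported as a duplicate block in every file.");
        }
    }
}
```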

TOPICS: FlexPMD

1 Correct answer: by Xavier (Adobe Employee), Nov 30, 2009 (see the last reply below)

Adobe Employee, Nov 30, 2009

Hi Brian,

I noticed that as well.

I need to dive into it in order to exclude comments from the token stream (comments should definitely not be included in this stream).

Feel free to create an issue in JIRA so that we don't forget it.

Thanks

Xavier

Adobe Employee, Nov 30, 2009

Hi Brian,

OK. I took a look, and it appeared that a fair number of the tokens in the stream were noise (/**, \n, {, }, ...).

So I removed those tokens from the token stream, and that led to much more accurate results.

The fix will be included in the next release.

Xavier
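
Xavier's fix amounts to filtering formatting and comment noise out of the token stream before the duplicate search runs. Here is a minimal sketch of that idea (plain Java, not FlexPMD's actual implementation; the class name, noise rule, and sample tokens are hypothetical):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class TokenFilterSketch {

    // Treat pure formatting tokens and comment tokens as noise. This mirrors
    // the kinds of tokens Xavier lists (/**, \n, {, }); the exact rule inside
    // FlexPMD may differ.
    static boolean isNoise(String token) {
        return token.equals("\n") || token.equals("{") || token.equals("}")
                || token.equals(";")
                || token.startsWith("/*") || token.startsWith("//");
    }

    // Keep only meaningful code tokens before running the duplicate search.
    static List<String> filterNoise(List<String> rawTokens) {
        return rawTokens.stream()
                .filter(t -> !isNoise(t))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Hypothetical token stream for a file that starts with a standard header.
        List<String> raw = Arrays.asList(
                "/** standard copyright header */", "\n",
                "class", "Foo", "{", "\n",
                "function", "bar", "(", ")", ":", "void", "{", "}", "\n", "}");

        System.out.println(filterNoise(raw));
        // -> [class, Foo, function, bar, (, ), :, void]
    }
}
```

With the header and braces stripped out, only genuinely repeated code tokens can accumulate toward the minimum token count, which is why the results become more accurate.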
